
A rapid and cost-effective pipeline for digitization of museum specimens with 3D photogrammetry

  • Joshua J. Medina,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Moore Laboratory of Zoology, Occidental College, Los Angeles, CA, United States of America

  • James M. Maley,

    Roles Conceptualization, Data curation, Methodology, Project administration, Supervision, Writing – review & editing

    Affiliation Moore Laboratory of Zoology, Occidental College, Los Angeles, CA, United States of America

  • Siddharth Sannapareddy,

    Roles Data curation, Investigation, Methodology, Validation, Writing – original draft, Writing – review & editing

    Affiliation Moore Laboratory of Zoology, Occidental College, Los Angeles, CA, United States of America

  • Noah N. Medina,

    Roles Investigation, Methodology, Writing – original draft, Writing – review & editing

    Affiliation Moore Laboratory of Zoology, Occidental College, Los Angeles, CA, United States of America

  • Cyril M. Gilman,

    Roles Investigation, Methodology, Validation, Writing – original draft, Writing – review & editing

    Affiliation Moore Laboratory of Zoology, Occidental College, Los Angeles, CA, United States of America

  • John E. McCormack

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Supervision, Writing – original draft, Writing – review & editing

    mccormack@oxy.edu

    Affiliation Moore Laboratory of Zoology, Occidental College, Los Angeles, CA, United States of America

Abstract

Natural history collections are yielding more information as digitization brings specimen data to researchers and connects specimens across museums, and as new technologies allow for large-scale data collection. A key goal in specimen digitization is therefore developing methods that both increase access and allow for the highest yield of phenomic data. 3D digitization is increasingly popular because it has the potential to meet both aspects of that goal. However, current methods overlook or do not prioritize some of the most sought-after phenotypic traits: those involving the external appearance of specimens, especially color. Here, we introduce an efficient and cost-effective pipeline for 3D photogrammetry to capture the external appearance of natural history specimens and other museum objects. 3D photogrammetry aligns and compares sets of dozens, hundreds, or even thousands of photos to create 3D models. The hardware set-up requires little physical space and around $3,000 in initial investment, while the software pipeline requires $1,400/year for proprietary software subscriptions (with open-source alternatives). The creation of each 3D model takes 1–2 hours per specimen, and much of the software pipeline is automated with minimal supervision required, including the onerous step of mesh processing. We showcase the method by creating 3D models for most of the type specimens in the Moore Laboratory of Zoology bird collection and show that digital bill measurements are comparable to hand-taken measurements. Color data, while not collected as part of this pipeline, are easily extractable from the models and represent one of the most promising areas of data collection. Future advances could adapt the method to capture ultraviolet reflectance and to increase efficiency and model quality. Combined with genomic data, phenomic data from 3D models, including those produced by photogrammetry, will open new doors to understanding organismal evolution.

Introduction

Natural history collections are experiencing a renaissance, as new analytical techniques are able to draw more information from each specimen [1–4], embodied in the concept of the Extended Specimen [5]. At the same time, digitization efforts connect these information-rich specimens across museum collections, allowing for the creation of large-scale biodiversity data sets [6–8]. Such large-scale biodiversity data have been used recently to show biological responses to climate change [4, 9] and to study broad-scale evolutionary patterns [10–12].

A key goal of efforts to connect large-scale biodiversity data across museums is the creation of tools that facilitate mass digitization [13, 14] while also providing the highest quality data, which can later be extracted by researchers [15] or through crowd-sourcing [11]. Given the sheer amount of extractable data, it is no surprise that there has long been interest in 3D digitization in the world of natural history collections [16]. Currently, the most common 3D digitization techniques for natural history specimens are laser scanning and computed tomography (CT) scanning [17, 18]. Laser scanning creates a 3D model through external tracking of the 3D position of a laser sight, whereas CT scanning uses penetrating waves that capture an image in 2D slices, which are then layered into a 3D model. While these methods provide a wealth of new data, neither captures one of the most sought-after features of specimens: full-color external phenotype.

Here, we outline a rapid and cost-effective method for obtaining 3D models of the external features of natural history specimens and other museum objects using digital photogrammetry. Digital photogrammetry (i.e., ‘Structure from Motion’) involves photographing an object from multiple angles, then using software that aligns common landmarks between the photographs to reconstruct a 3D model from sets of dozens, hundreds, or even thousands of photos [19]. First applied to landscapes and geological features [20, 21], 3D photogrammetry then moved to archaeology, paleontology, and cultural heritage sites [22–24], and eventually was improved to capture smaller and smaller objects [25, 26].

In natural history collections, insect specimens were the first to see applications of 3D photogrammetry, mostly directed toward type specimens [27–29]. Only very recently have a few lineage-specific applications emerged for the study of vertebrates like parrots [30], bats [31], and terrestrial mammals [32, 33]. Another recent development is Beastcam technology, a patented platform to elucidate live-animal motion and functional morphology using multi-camera 3D photogrammetry [34]. Despite this flurry of interest, to date, no large-scale 3D photogrammetry efforts have begun to mass-digitize specimens in natural history collections for the purpose of providing phenomic data for broad-scale evolutionary studies. In part, this is because the hardware and software described so far have been complicated, expensive, use-specific, or have required considerable investment in staff time.

The 3D photogrammetry method we outline below is broadly applicable and straightforward, relatively cheap, and largely automated in both its hardware and software, allowing for efficient digitization of biological specimens, even those with moderately complex structures. We show a simple use case by comparing digital measurements from 3D models to those taken by hand from a series of bird holotypes (i.e., specimens representing the link between scientific names and phenotypes). We compare our pipeline to existing methods and discuss future directions.

Methods

Camera, hardware, and physical set-up

A detailed step-by-step guide to the entire procedure, which takes 1–2 hours per specimen including processing time (Table 1), can be found on GitHub at https://github.com/JMedina3D/MLZ-Museum-Photogrammetry-Protocol. For digital photography, we use a Sony a7RII camera for its large, high-resolution (42-megapixel) sensor and a 90 mm macro lens for fine details. When capturing larger specimens, the macro lens is alternated with a 15 mm wide-angle lens. We use a polarizing filter to remove excessive reflections and harsh shading in the final model.

Table 1. Average processing time for each component of the pipeline.

https://doi.org/10.1371/journal.pone.0236417.t001

For scanning large objects, it can be useful to combine multiple camera distances during image capture: medium-distance shots that frame the object’s form for geometric reconstruction, and close-up or zoomed-in shots for detail and texture. Since our objects were small to medium in size, we use a high-megapixel camera, which allows the same photoset to be used for both detail and structure. This cuts the number of photos in half, simplifies camera setup, and eases the process of automation. Additionally, a single camera position minimizes inconsistencies in focal length and lighting.

The physical hardware includes a stand for holding the specimen, a turntable, and a matte backdrop. The stand is designed for small to medium-sized bird specimens, with four spokes that can be concealed within the feathers to allow for an unobstructed 360-degree view of the specimen (Fig 1). We place two lights (“softboxes”) on either side of the set-up to ensure even lighting. We use a motorized Comxim turntable that integrates with the shutter release of the camera. To standardize size and color, we place a ruler and an X-Rite Colorchecker below each specimen. The footprint of the entire physical set-up, including the camera and tripod, is 1 m x 1.5 m.

Fig 1. One of the authors with the physical set-up, showing the stand on top of a turntable with shutter integration to a camera on tripod.

The ruler sits just below the specimen. Softbox light sources (one shown) are placed on both sides of the specimen. The specimen shown is the holotype of the Tufted Jay (Cyanocorax dickeyi), MLZ:Bird:12342.

https://doi.org/10.1371/journal.pone.0236417.g001

Image capture

Before image capture, we record the specimen catalog number to a spreadsheet, and we take a photo of the color chart to assess ambient light later during processing. For each photograph, we use an f/22 aperture for maximum depth of field, and we use the lowest ISO setting to minimize noise. For 360-degree image capture, the turntable is set to rotate 3.75 degrees between each photo, for a total of 96 photos taken over 7 minutes. This is repeated three times: one set of images is taken level with the specimen, and one set each is taken angled roughly 45 degrees above and below the specimen. A summary of these capture parameters is sketched below.
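The short sketch below simply restates the capture geometry described above as a calculation; the variable names are illustrative and are not tied to any camera or turntable SDK.

```python
# Illustrative summary of the capture parameters described above (f/22, lowest ISO,
# 3.75-degree turntable increments, three elevation passes). Not tied to any camera SDK.
DEGREES_PER_SHOT = 3.75
SHOTS_PER_PASS = int(360 / DEGREES_PER_SHOT)      # 96 photos per full rotation
ELEVATION_ANGLES = [0, 45, -45]                   # level, ~45 degrees above, ~45 degrees below

total_photos = SHOTS_PER_PASS * len(ELEVATION_ANGLES)  # 288 photos per specimen
print(f"{SHOTS_PER_PASS} photos per pass x {len(ELEVATION_ANGLES)} passes = {total_photos} photos")
```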

Image processing

The rest of the workflow to create a 3D model is largely automated (Fig 2) and requires minimal supervision (e.g., opening software programs, starting batch processing), except for a small amount of manual mesh processing (see below). For processing the scan data, we use a Windows PC with an Intel i7 processor, 16 GB of RAM, and an Nvidia GeForce GTX 1080 graphics card. The approximate time to completion of each step is listed in Table 1. The 288 photos (96 photos x 3 angles) are imported into Adobe Lightroom for image processing. Exposures and white balances are standardized using the 75% gray color chip, and photos are corrected for distortion and color aberrations using a color profile generated from the standard color chart. Photos are exported as high-quality JPGs. Then, within Adobe Photoshop, masks are created using a batch process that obscures the backdrop and isolates the subject in preparation for the next step: alignment of the 2D images to create a 3D model (image registration). The images are exported from Adobe Photoshop as JPGs with the “_masked” suffix. A minimal open-source sketch of this masking step is shown below.
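Our pipeline performs masking as a Photoshop batch process; the following is only a minimal open-source sketch of the same idea using OpenCV, assuming a uniform matte backdrop that is darker than the specimen. The folder names, threshold value, and kernel size are illustrative and would need tuning for a real set-up.

```python
"""Sketch of an open-source alternative to the Photoshop batch-masking step (assumptions:
dark, uniform backdrop; illustrative paths and threshold)."""
from pathlib import Path
import cv2

SRC = Path("photos_jpg")       # color-corrected JPGs exported from raw processing
DST = Path("photos_masked")
DST.mkdir(exist_ok=True)

for photo in sorted(SRC.glob("*.jpg")):
    img = cv2.imread(str(photo))
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Keep pixels brighter than the backdrop; 40 is a placeholder that would need tuning.
    _, mask = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY)
    # Remove small speckles from the mask before applying it to the image.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    masked = cv2.bitwise_and(img, img, mask=mask)
    cv2.imwrite(str(DST / f"{photo.stem}_masked.jpg"), masked)
```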

Fig 2. Overview of the workflow using a specimen of a Rufous-winged Tanager (Tangara lavinia), MLZ:Bird:8631.

(a) Physical set-up shown in Fig 1; (b) One of 288 pre-processed RAW photos taken during the subject's rotation on the turntable; (c) Image processing using color chart to standardize lighting and color; (d) Image registration (alignment) where the 3D point cloud is surrounded by aligned camera locations, shown as blue rectangles (only two angles shown); (e) The 3D mesh generation from the point cloud includes extraneous detail, such as the scale bar, which can be removed after scale calibration; (f) Surface topology (shown in wireframe) optimized and UV coordinates (right) mapped onto the model; (g) Import into image registration software to add texture from the aligned photos, using UV coordinates (right).

https://doi.org/10.1371/journal.pone.0236417.g002

Image registration and mesh reconstruction

Images and corresponding masks are then loaded into Reality Capture and aligned, with the result visualized as a high-resolution point cloud with over 200,000 points. The alignment process takes approximately 10–20 minutes (Table 1). Reality Capture provides processing speeds well suited to digitizing larger collections. Other software, such as Agisoft Metashape, might have advantages in other use cases, especially for smaller collections [35]. The physical scale is then defined manually by placing markers on the standard ruler using 3–5 of the 288 photos. From the completed point cloud, a mesh is generated at a polycount of 3–6 million triangles and exported as an OBJ file.
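For readers using Metashape (mentioned above as alternative software), the same alignment-to-mesh steps can be scripted through its Python API. This is only a sketch: argument names vary between Metashape versions, the file paths and face counts are illustrative, and the manual marker-based scaling step is not included.

```python
"""Minimal sketch of image registration and mesh reconstruction with the Agisoft Metashape
Python API (illustrative paths; keyword arguments differ across versions)."""
import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(sorted(glob.glob("photos_masked/*_masked.jpg")))

chunk.matchPhotos(downscale=1)   # feature detection and matching across the 288 photos
chunk.alignCameras()             # camera poses and sparse point cloud
chunk.buildDepthMaps(downscale=2)
chunk.buildModel(source_data=Metashape.DepthMapsData)  # dense mesh with millions of faces

chunk.exportModel(path="MLZ_specimen_highpoly.obj")
```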

Mesh processing (manual)

While most models can be taken directly into procedural processing, we suggest using Pixologic ZBrush when manual quality control is desired. During manual quality control, the stand and other extraneous geometry attached to the specimen (e.g., the specimen tag) are erased using ZBrush’s Sculptris tools. The mesh is then exported as a triangulated OBJ file with a “_highpoly” suffix.

Mesh processing (procedural)

After manual mesh processing, the model goes through a series of procedural mesh changes automated using SideFX's Houdini software (see Fig 3 and legend for details). First, the polycount is reduced by decimating the mesh according to a “quality tolerance,” which can be manually ‘painted on’ via a heat map to prioritize important features on the model’s surface [36]. Next, topology defects, such as holes and extraneous scan data, are automatically recognized and removed. The mesh is aligned, and a further node can be toggled to retopologize the mesh into quads if desired, with polygon edge flow dictated by a choice of presets, which can take cues from the “quality tolerance” heatmap previously applied. The mesh is then given UV coordinates generated from Houdini's automatic UV toolkit. In this context, UV does not refer to ultraviolet, but rather to the coordinates of texture mapping (i.e., UV in addition to XYZ coordinates). We found Houdini’s automatic UV seam generation to be more efficient than that in Metashape, ZBrush, or Blender, and it can be further tailored by reusing the heatmap initially used for retopology. The processed mesh is then exported as an OBJ file with a “_lowpoly” suffix. This file acts as the most optimized version of the model to texture and upload. A minimal open-source approximation of the clean-up and decimation stages is sketched after Fig 3.

Fig 3. Detailed overview of the procedural mesh processing (Fig 2f), which follows a series of automated steps to process the 'high poly' 3D mesh into a 'low poly' optimized mesh.

(a) Import: The mesh is imported, and its name and filepath are extracted for use during automation. Parameters affecting retopology and UV mapping can be modified during import; (b) Voxelize: To prepare for retopology, the mesh is filtered through a voxel grid that ensures uniformly sized surface topology. This voxel mesh is created at a higher resolution than the original and projected onto the original surface to prevent distortion; (c) Retopology: The model's 'polycount,' or number of surface triangles, is reduced and optimized, either according to a predefined angle tolerance or based on a predefined heatmap selecting areas of interest to be preserved at high resolution; (d) Clean-up: Holes in the mesh, non-triangular and non-manifold geometry, and other topology errors are located and fixed; (e) UV map: UV coordinates are created or "unwrapped" using a series of automatic projection methods, depending on the subject's shape. For most birds, we use 8 simultaneous planar projections placed according to the model's bounding box. This method uses angle-based flattening; (f) Export: The finished 'low-poly' mesh is exported and renamed according to the initial name and filepath.

https://doi.org/10.1371/journal.pone.0236417.g003
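Our pipeline performs these stages in Houdini; as a rough, open-source point of comparison, the clean-up and decimation steps (Fig 3c–d) can be approximated with Open3D. This sketch applies only uniform quadric decimation, so it does not support the painted “quality tolerance” heatmap or the UV unwrapping described above; the file names and target face count are illustrative.

```python
"""Minimal open-source approximation of the clean-up and decimation stages (Fig 3c-d)
using Open3D instead of Houdini (illustrative paths and face target)."""
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("MLZ_specimen_highpoly.obj")

# Clean-up: remove duplicate and degenerate geometry before decimating.
mesh.remove_duplicated_vertices()
mesh.remove_degenerate_triangles()
mesh.remove_non_manifold_edges()

# Uniform quadric decimation down to ~500k triangles (uniform, unlike the
# heatmap-weighted reduction used in the Houdini pipeline).
lowpoly = mesh.simplify_quadric_decimation(target_number_of_triangles=500_000)
lowpoly.compute_vertex_normals()

o3d.io.write_triangle_mesh("MLZ_specimen_lowpoly.obj", lowpoly)
```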

Texture generation and final optimization

The “_lowpoly” model is then imported back into Reality Capture. From here, the aligned photos are used to generate either a 4K or 8K albedo texture map using the previously generated UV coordinates. This texture is exported into the same folder as the finalized “_lowpoly” OBJ file. The model is loaded into a 3D viewer for final inspection of the scale, texture, and placement in the 3D environment. Additional guidelines for file storage and optimization can be found on the Smithsonian 3D Digitization metadata blog (https://dpo.si.edu/blog/smithsonian-3d-metadata-model) and in Morphosource’s data management guidelines [37].

Ethics statement

The individual in Fig 1 has given written informed consent (as outlined in PLOS consent form) to publish this photo.

Results and discussion

We produced a set of 40 3D models representing most of the type specimens in the Moore Laboratory of Zoology (MLZ) bird collection (S1 Table). These models are freely available for viewing and downloading on Sketchfab (https://skfb.ly/6PMr9 & https://skfb.ly/6PMru). A detailed guide for implementing the software pipeline, including code that automates file creation, image masking, and image storage, is available in the GitHub repository linked in the Methods. The hardware set-up requires little physical space (1 x 1.5 m) and around $3,000 in initial investment, while the software pipeline requires $1,400/year for proprietary software subscriptions (Table 2). There are open-source alternatives (Table 3), although we have not incorporated them into our current pipeline. When using Reality Capture to process the scan data, the creation of each 3D model takes 1–2 hours per specimen, and much of the software pipeline is automated with minimal supervision required.

Table 3. Open-source software alternatives for various steps in the pipeline.

https://doi.org/10.1371/journal.pone.0236417.t003

A comparison of morphometrics from physical and digital specimens shows that digital measurements from 3D models are comparable to hand-taken measurements. Across 20 specimens, the average difference between digital and hand-taken measurements of bill length was less than 1 mm (mean = 0.78 mm). The average bill length for these 20 specimens (mostly different species) was ~14 mm, meaning that the average error between hand-taken and digital measurements was about 5%, well within the range of what is considered high repeatability for hand-taken measurements by the same observer (informally >90%). A sketch of this calculation is shown below.
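The percent-error figure follows directly from the two reported averages (0.78 mm / ~14 mm). The paired values in this sketch are hypothetical placeholders used only to show the calculation, not the study's measurements.

```python
"""How the ~5% figure is computed from paired measurements (hypothetical example values)."""
hand = [14.2, 13.8, 15.1, 12.9]      # hand-taken bill lengths (mm), illustrative
digital = [13.6, 14.5, 14.4, 13.5]   # digital measurements from the 3D models (mm), illustrative

diffs = [abs(h - d) for h, d in zip(hand, digital)]
mean_abs_diff = sum(diffs) / len(diffs)
mean_hand = sum(hand) / len(hand)

# With the study's reported averages: 0.78 mm on ~14 mm bills, i.e., roughly 5% relative error.
print(f"mean absolute difference: {mean_abs_diff:.2f} mm")
print(f"relative error: {100 * mean_abs_diff / mean_hand:.1f}%")
```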

While this demonstrates that 3D models can yield morphometric data, some measurements, like tail length, require landmarks inside the feathers (e.g., the insertion point of the tail feathers into the skin), which are impossible to determine from digital models or photographs. Even basic bill measurements can be challenging for some species because the nares are obscured by feathers. It is therefore important to recognize that digital models will never replace physical specimens as the primary source for biodiversity data. Apart from the difficulty of measuring certain traits, a more general reason for the primacy of the physical specimen is that the scientific uses of specimens continue to expand with continued technological development. 3D models might capture the external features of specimens with amazing resolution, opening doors of access and data collection, but they will entirely miss other important aspects of the Extended Specimen (sensu [5]): DNA, proteins, microstructures, hidden structures, parasites, internal anatomy, and much more.

Some aspects of external morphology are difficult to capture with 3D photogrammetry. Complex textures such as shaggy barbules or velvety feathers may result in fidelity loss and a tendency to clump or merge. During photo capture, the movement of any loose or delicate features may lead to interrupted alignment downstream. We found that contour feathers were well captured by the detailed texture maps, including more detail in the feather barbs than the human eye is capable of focusing on at close range. However, the modeling software often struggled to isolate thin, flat, or complex structures, such as the tail, protruding single feathers, and the toes. We discuss potential improvements to the pipeline below.

Comparison to other existing 3D photogrammetry methods

A few studies have employed 3D photogrammetry for particular questions and taxa [30, 31, 38] and it has found increasing use in paleontology [39, 40], but as far as we know there is only one other attempt at a method for mass digitization of natural history specimens using 3D photogrammetry. Nguyen et al. [27] introduced custom-built photogrammetry hardware for the 3D imaging of insect specimens. Compared to the method they outline, which has likely been modified since then, our method does not use custom-built hardware, and therefore requires about half the initial investment cost (Table 2). The Smithsonian Institution Digitization Program Office offers a gallery of publicly available 3D artifact scans, and provides a benchmark standard for sorting 3D data in a mass-digitization context.

Automation and the importance of procedural mesh processing

A key to 3D digitization’s practical use in large collections is the minimization of staff time. Automating the camera setup reduces staff time for the physical photography, but it is the procedural mesh processing in our software pipeline that may offer the biggest advance in automation. Mesh processing comes after point-cloud generation and includes removing extraneous detail, optimizing model topology, and avoiding data loss during decimation. In a comparative study of 3D scanning techniques, mesh processing averaged 40 minutes of staff time per scan [17]. Mesh processing is a known bottleneck in the digitization pipeline and accounts for most of the 3D graphics experience and training required [34, 35].

To address this problem, we implemented a procedural, node-based mesh-processing tool in SideFX Houdini, which cut down not only on staff time but also on training time and required incoming expertise. Adopting Houdini’s tools reduced our staff time to 5–10 minutes per scan with minimal to no oversight. Because the tool follows a procedure of automated batch ‘nodes,’ an understanding of the node graph is all that is needed to use the pipeline. While 3D graphics experience is helpful when modifying the pipeline, it is not necessary for running or even troubleshooting it. An automated procedure has other advantages. When using multiple software packages, the pipeline’s accessibility can be threatened by inconsistent updates, changes in licensing, and other logistical issues. Consolidating small processes into a larger, procedural program like Houdini improves compatibility and access to earlier builds of the pipeline.

Automated mesh processing techniques are still largely considered experimental, are often relegated to being ‘add-ons’ to more general-use software, and can be cumbersome, especially when working between software packages. Bot and Irschick [34] point out that Agisoft’s automatic UVs often come out fragmented, without proper seams, and can be difficult to work with. We noted the same issue when using Reality Capture. They also found that uniform mesh triangulation without accounting for quality tolerance or edge alignment can be problematic. We found similar problems with decimation when applied uniformly across a model, even in dedicated 3D packages such as Blender and Autodesk Maya. Similarly, Veneziano et al. [36] found that uniform decimation can lead to a “massive loss of information,” and that mesh processing should protect areas of interest, especially those with articulated or thin surfaces, before decimation occurs.

We addressed some of these issues with procedural node-based mesh processing in our pipeline. For example, using SideFX Houdini, we could ‘paint on’ nodes prior to processing that protected thin or complex areas from decimation, while optimizing flat surfaces that did not need polygon-dense detail to be accurate. The resulting heatmap helped optimize UV generation and inform other steps in the procedure. Procedural modeling can also bridge software programs that would otherwise take time and training to navigate individually. An example from our pipeline includes combining Instant Meshes, a useful mesh-processing tool, with a custom Taubin smoothing surface operator written in OpenCL and implemented visually as a node in Houdini’s node-graph interface [36]. Procedural mesh-processing addresses a largely overlooked stage of the 3D digitization process, one that will only increase in importance as 3D models become more public-facing and shared via web platforms.
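Our pipeline implements Taubin smoothing as a custom OpenCL operator inside Houdini's node graph; for readers without that set-up, the same smoothing is available off the shelf. The sketch below uses Open3D's built-in Taubin filter; the iteration count and file names are illustrative, not values from our pipeline.

```python
"""Minimal sketch of Taubin smoothing with Open3D, as an off-the-shelf stand-in for the
custom OpenCL operator described above (illustrative iteration count and paths)."""
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("MLZ_specimen_lowpoly.obj")

# Taubin smoothing alternates a shrinking Laplacian step (lambda > 0) with an inflating
# step (mu < 0), reducing scan noise with much less volume loss than plain Laplacian smoothing.
smoothed = mesh.filter_smooth_taubin(number_of_iterations=10)
smoothed.compute_vertex_normals()

o3d.io.write_triangle_mesh("MLZ_specimen_lowpoly_smoothed.obj", smoothed)
```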

Other future improvements to the pipeline

Another area of potential improvement is hardware efficiency. Future implementations could take advantage of photogrammetry’s unique modularity, compared to other methods, where different lenses and cameras can be swapped depending on the size of the specimen and its surface detail. For bird specimens, a qualitative assessment of bird diversity suggested that approximately 90% of living birds could be digitized with no or only minor modifications to the set-up outlined here. A three-camera setup would reduce the image capture time from 20 minutes to 7 minutes. For improving model quality, a stationary specimen with rotating cameras could potentially allow for higher fidelity scans through improved software processing and reduced noise in the photos, at the cost of increasing the physical footprint of the hardware set-up. Certain software features could be modified to improve quality or efficiency. For example, the quality setting can be increased when building a point cloud during alignment, but at the cost of processing time. We are working on further automating the pipeline with code that removes as much manual oversight as possible (see Github link in Methods for latest protocol updates). Finally, we are also working on a version using open-source alternatives to proprietary software (Table 3).

With a processing time of 1–2 hours per specimen and minimal manual oversight, we estimate that one worker could complete 4 specimens on one computer during an average workday, amounting to about 1,000 specimens digitized by one person on one computer in one year. Given that entire stages of the computational processing pipeline can be batched during idle periods, such as between workdays, these productivity estimates are highly conservative. The throughput rate can be effectively multiplied based on the number of simultaneous workflows [28], with more cameras and PCs. It has been shown that the integration of photogrammetry with other scanning methods can help cover the weaknesses of individual methods [23, 27]. Photogrammetry is recommended for its cost-effectiveness, ease of implementation, and modularity [41].

Public access

Web-accessible, publicly viewable 3D collections are a primary goal of future digitization efforts [42, 43]. Existing 3D platforms like Sketchfab and the Smithsonian Institution Digitization Program demonstrate the advantages and challenges of hosting 3D models online. Producing 3D models that are not only accurate but also web-accessible poses its own set of challenges, as 3D models must be optimized for space efficiency and proper rendering in public viewing tools. Due to its advantages in color-texture fidelity, photogrammetry has unique value here, but its advantages are only realized once staff time has been invested to process the model. File size, topology, UV coordinates, and texture maps all must be properly managed when web-hosting 3D models. This once again highlights the importance of the procedural mesh processing we describe above.

Future directions: Color analysis

Currently, large-scale color data acquisition from museum specimens employs spectrophotometry [44], 2D images [45, 46], or scans of illustrations [47]. All of these methods have limitations. Spectrophotometry assesses color via point sampling and therefore does not allow for reasonable whole-organism color analysis. Despite artists’ best efforts, illustrations cannot always accurately portray color and cannot capture hyperspectral properties like ultraviolet reflectance. 2D image analysis, while probably the best available method, is still limited by the number of specimen rotations used (usually three: front, back, and side) and might introduce error in the flattening process [48].

Color analysis is therefore one of the most promising immediate future directions for 3D photogrammetry, especially because methods like CT scanning, laser scanning, and structured light scanning are yet to be optimized for color capture. Though color options are available for laser and structured light scanning, they are limited by their hardware and do not match the resolution, sharpness, and detail provided by a mid-range camera sensor [17, 40, 41, 49]. While we did not specifically integrate color analysis into the pipeline, several open-source programs (like ImageJ; [50]) allow for detailed color analysis from flattened images created from 3D models (Fig 4; a sketch of this kind of analysis follows the figure). While most cameras filter out ultraviolet light—an important visual channel for many birds [51]—camera sensors can be modified to detect the ultraviolet range. Even certain reflectance properties like iridescence (i.e., changing color based on angle of light incidence) can be added during manual mesh processing and texture generation and visualized using real-time rendering engines, like those in Sketchfab and Morphosource [37].

Fig 4. Sample color analysis of 3D models using a Guadalupe House Finch (Haemorhous mexicanus amplus) specimen, MLZ:Bird:65299.

A 3D model (top) is broken into flattened components (bottom left), and each pixel is visualized as a cloud of points in color space with ImageJ.

https://doi.org/10.1371/journal.pone.0236417.g004
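Fig 4 was produced with ImageJ; the sketch below shows the same kind of color-space visualization done in Python instead, reading pixels directly from the model's flattened albedo texture. The texture file name and sample size are illustrative.

```python
"""Sketch of the color-space visualization in Fig 4, using Python instead of ImageJ
(illustrative texture file name and sample size)."""
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

# Load the flattened albedo texture and keep only texels covered by the UV islands.
tex = np.asarray(Image.open("MLZ_specimen_albedo.png").convert("RGBA"), dtype=float) / 255.0
pixels = tex.reshape(-1, 4)
pixels = pixels[pixels[:, 3] > 0][:, :3]

n = min(5000, len(pixels))
sample = pixels[np.random.default_rng(0).choice(len(pixels), n, replace=False)]

# Scatter each sampled pixel as a point in RGB color space, colored by its own value.
ax = plt.figure().add_subplot(projection="3d")
ax.scatter(sample[:, 0], sample[:, 1], sample[:, 2], c=sample, s=2)
ax.set_xlabel("R"); ax.set_ylabel("G"); ax.set_zlabel("B")
plt.show()
```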

This 3D photogrammetry pipeline is therefore a step toward a much-needed and more comprehensive method of color analysis based on continuous, whole-organism, full-spectrum color [52]. Combined with large-scale genomic data [53, 54] and complete phylogenies for various organismal groups [55, 56], color data from 3D digital models will help elucidate links between genotype and phenotype. Considering these links alongside other extractable phenomic data will open the door to new insights into organismal ecology, evolution, and functional morphology.

Conclusions

3D photogrammetry is a promising method for capturing the external appearance of natural history specimens. It has been little used in natural history collections because no existing pipelines have proven efficient, cost-effective, and easy to set up. By introducing this pipeline for 3D photogrammetry, we hope to catalyze increased 3D digitization of the external features of specimens, which can complement 3D models of internal anatomy from CT scanning. The resulting phenomic data, collated across museums, will complement genomic data, opening new doors to the study of organismal ecology and evolution and the link between genotype and phenotype.

Supporting information

S1 Table. A list of the holotypes in each cloud and their relevant information.

https://doi.org/10.1371/journal.pone.0236417.s001

(XLSX)

Acknowledgments

We thank Chris Gilman, Jacob Sargeant, and David Dellinger of Occidental College’s Library/Center for Digital Liberal Arts for help and encouragement.

References

  1. Lister A.M., Group C.C.R., 2011. Natural history collections as sources of long-term datasets. Trends in Ecology & Evolution 26, 153–154.
  2. Lacey E.A., Hammond T.T., Walsh R.E., Bell K.C., Edwards S.V., Ellwood E.R., et al., 2017. Climate change, collections and the classroom: using big data to tackle big problems. Evolution: Education and Outreach 10, 2.
  3. Denney D.A., Anderson J.T., 2019. Natural history collections document biological responses to climate change. Global Change Biology.
  4. Watanabe M.E., 2019. The Evolution of Natural History Collections: New research tools move specimens, data to center stage. BioScience 69, 163–169.
  5. Webster M.S., 2017. The Extended Specimen: Emerging Frontiers in Collections-Based Ornithological Research. CRC Press.
  6. Page L.M., MacFadden B.J., Fortes J.A., Soltis P.S., Riccardi G., 2015. Digitization of biodiversity collections reveals biggest data on biodiversity. BioScience 65, 841–842.
  7. Hedrick B., Heberling M., Meineke E., Turner K., Grassa C., Park D., et al., 2019. Digitization and the future of natural history collections. PeerJ Preprints 7, e27859v27851.
  8. Lendemer J., Thiers B., Monfils A.K., Zaspel J., Ellwood E.R., Bentley A., et al., 2019. The Extended Specimen Network: A Strategy to Enhance US Biodiversity Collections, Promote Research and Education. BioScience, in press.
  9. Weeks B.C., Willard D.E., Zimova M., Ellis A.A., Witynski M.L., Hennen M., et al., 2019. Shared morphological consequences of global warming in North American migratory birds. Ecology Letters, in press.
  10. Holmes M.W., Hammond T.T., Wogan G.O., Walsh R.E., LaBarbera K., Wommack E.A., et al., 2016. Natural history collections as windows on evolutionary processes. Molecular Ecology 25, 864–881. pmid:26757135
  11. Cooney C.R., Bright J.A., Capp E.J., Chira A.M., Hughes E.C., Moody C.J., et al., 2017. Mega-evolutionary dynamics of the adaptive radiation of birds. Nature 542, 344. pmid:28146475
  12. Zhang J., Cong Q., Shen J., Opler P.A., Grishin N.V., 2019. Genomics of a complete butterfly continent. bioRxiv, 829887.
  13. Beaman R.S., Cellinese N., 2012. Mass digitization of scientific collections: New opportunities to transform the use of biological specimens and underwrite biodiversity science. ZooKeys, 7–17.
  14. Blagoderov V., Kitching I.J., Livermore L., Simonsen T.J., Smith V.S., 2012. No specimen left behind: industrial scale digitization of natural history collections. ZooKeys, 133.
  15. Hsiang A.Y., Nelson K., Elder L.E., Sibert E.C., Kahanamoku S.S., Burke J.E., et al., 2018. AutoMorph: Accelerating morphometrics with automated 2D and 3D image processing and shape extraction. Methods in Ecology and Evolution 9, 605–612.
  16. Falkingham P.L., Bates K.T., Avanzini M., Bennett M., Bordy E.M., Breithaupt B.H., et al., 2018. A standard protocol for documenting modern and fossil ichnological data. Palaeontology 61, 469–480.
  17. Mathys A., Brecko J., Semal P., 2013. Comparing 3D digitizing technologies: what are the differences? 2013 Digital Heritage International Congress (DigitalHeritage). IEEE, pp. 201–204.
  18. Davies T.G., Rahman I.A., Lautenschlager S., Cunningham J.A., Asher R.J., Barrett P.M., et al., 2017. Open data and digital morphology. Proceedings of the Royal Society B: Biological Sciences 284, 20170194. pmid:28404779
  19. Seitz S.M., Curless B., Diebel J., Scharstein D., Szeliski R., 2006. A comparison and evaluation of multi-view stereo reconstruction algorithms. 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition 1, 519–528.
  20. Bitelli G., Dubbini M., Zanutta A., 2004. Terrestrial laser scanning and digital photogrammetry techniques to monitor landslide bodies. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 35, 246–251.
  21. Westoby M.J., Brasington J., Glasser N.F., Hambrey M.J., Reynolds J.M., 2012. ‘Structure-from-Motion’ photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 179, 300–314.
  22. Pavlidis G., Koutsoudis A., Arnaoutoglou F., Tsioukas V., Chamzas C., 2007. Methods for 3D digitization of cultural heritage. Journal of Cultural Heritage 8, 93–98.
  23. Yastikli N., 2007. Documentation of cultural heritage using digital photogrammetry and laser scanning. Journal of Cultural Heritage 8, 423–427.
  24. Mallison H., Wings O., 2014. Photogrammetry in paleontology–a practical guide. Journal of Paleontological Techniques.
  25. Atsushi K., Sueyasu H., Funayama Y., Maekawa T., 2011. System for reconstruction of three-dimensional micro objects from multiple photographic images. Computer-Aided Design 43, 1045–1055.
  26. Galantucci L.M., Guerra M.G., Lavecchia F., 2018. Photogrammetry Applied to Small and Micro Scaled Objects: A Review. International Conference on the Industry 4.0 Model for Advanced Manufacturing. Springer, pp. 57–77.
  27. Nguyen C.V., Lovell D.R., Adcock M., La Salle J., 2014. Capturing natural-colour 3D models of insects for species discovery and diagnostics. PLoS One 9, e94346. pmid:24759838
  28. Ströbel B., Schmelzle S., Blüthgen N., Heethoff M., 2018. An automated device for the digitization and 3D modelling of insects, combining extended-depth-of-field and all-side multi-view imaging. ZooKeys, 1–27.
  29. Qian J., Dang S., Wang Z., Zhou X., Dan D., Yao B., et al., 2019. Large-scale 3D imaging of insects with natural color. Optics Express 27, 4845–4857. pmid:30876094
  30. Bright J.A., Marugán-Lobón J., Rayfield E.J., Cobb S.N., 2019. The multifactorial nature of beak and skull shape evolution in parrots and cockatoos (Psittaciformes). BMC Evolutionary Biology 19, 104. pmid:31101003
  31. Giacomini G., Scaravelli D., Herrel A., Veneziano A., Russo D., Brown R.P., et al., 2019. 3D Photogrammetry of Bat Skulls: Perspectives for Macro-evolutionary Analyses. Evolutionary Biology, 1–11. pmid:30606099
  32. Postma M., Tordiffe A.S.W., Hofmeyr M., Reisinger R.R., Bester L.C., Buss P.E., et al., 2015. Terrestrial mammal three-dimensional photogrammetry: multispecies mass estimation. Ecosphere 6, 1–16.
  33. Romano M., Manucci F., Palombo M.R., 2019. The smallest of the largest: new volumetric body mass estimate and in-vivo restoration of the dwarf elephant Palaeoloxodon ex gr. P. falconeri from Spinagallo Cave (Sicily). Historical Biology, 1–14.
  34. Bot J.A., Irschick D.J., 2019. Using 3D Photogrammetry to Create Open-Access Models of Live Animals: 2D and 3D Software Solutions. In: 3D/VR in the Academic Library: Emerging Practices and Trends, 54.
  35. Reljić I., Dunđer I., Seljan S., 2019. Photogrammetric 3D Scanning of Physical Objects: Tools and Workflow. TEM Journal 8, 383–388.
  36. Veneziano A., Landi F., Profico A., 2018. Surface smoothing, decimation, and their effects on 3D biological specimens. American Journal of Physical Anthropology 166, 473–480. pmid:29446075
  37. Boyer D.M., Gunnell G.F., Kaufman S., McGeary T.M., 2016. Morphosource: Archiving and sharing 3-D digital specimen data. The Paleontological Society Papers 22, 157–181.
  38. Ferreira Amado T., Moreno Pinto M.G., Olalla-Tárraga M.Á., 2019. Anuran 3D models reveal the relationship between surface area-to-volume ratio and climate. Journal of Biogeography 46, 1429–1437.
  39. Mallison H., 2010. The digital Plateosaurus II: an assessment of the range of motion of the limbs and vertebral column and of previous reconstructions using a digital skeletal mount. Acta Palaeontologica Polonica 55, 433–458.
  40. Falkingham P.L., 2012. Acquisition of high resolution three-dimensional models using free, open-source, photogrammetric software. Palaeontologia Electronica 15, 15.
  41. Hassan A.T., Fritsch D., 2019. Integration of Laser Scanning and Photogrammetry in 3D/4D Cultural Heritage Preservation–A Review. International Journal of Applied 9, 76–91.
  42. Berquist R.M., Gledhill K.M., Peterson M.W., Doan A.H., Baxter G.T., Yopak K.E., et al., 2012. The Digital Fish Library: using MRI to digitize, database, and document the morphological diversity of fish. PLoS One 7.
  43. Galeazzi F., Callieri M., Dellepiane M., Charno M., Richards J., Scopigno R., 2016. Web-based visualization for 3D data in archaeology: The ADS 3D viewer. Journal of Archaeological Science: Reports 9, 1–11.
  44. Burns K.J., Shultz A.J., 2012. Widespread cryptic dichromatism and ultraviolet reflectance in the largest radiation of Neotropical songbirds: implications of accounting for avian vision in the study of plumage evolution. The Auk 129, 211–221.
  45. Cooney C.R., Varley Z.K., Nouri L.O., Moody C.J., Jardine M.D., Thomas G.H., 2019. Sexual selection predicts the rate and direction of colour divergence in a large avian radiation. Nature Communications 10, 1773. pmid:30992444
  46. Merwin J.T., Seeholzer G.F., Smith B.T., 2020. Macroevolutionary bursts and constraints generate a rainbow in a clade of tropical birds. BMC Evolutionary Biology 20, 1–19. pmid:31906845
  47. Dale J., Dey C.J., Delhey K., Kempenaers B., Valcu M., 2015. The effects of life history and sexual selection on male and female plumage colouration. Nature 527, 367–370. pmid:26536112
  48. Andrea C., Chiappelli M., 2019. How flat can a horse be? Exploring 2D approximations of 3D crania in equids. bioRxiv, 772624.
  49. Caine M., Magen M., 2017. Low cost heritage imaging techniques compared. Electronic Visualisation and the Arts, 430–437.
  50. Abràmoff M.D., Magalhães P.J., Ram S.J., 2004. Image processing with ImageJ. Biophotonics International 11, 36–42.
  51. Cuthill I.C., Partridge J.C., Bennett A.T., Church S.C., Hart N.S., Hunt S., 2000. Ultraviolet vision in birds. Advances in the Study of Behavior. Elsevier, pp. 159–214.
  52. Funk E.R., Taylor S.A., 2019. High-throughput sequencing is revealing genetic associations with avian plumage color. The Auk 136, ukz048.
  53. Genome 10K Community of Scientists, 2009. Genome 10K: a proposal to obtain whole-genome sequence for 10 000 vertebrate species. Journal of Heredity 100, 659–674. pmid:19892720
  54. O'Brien S.J., Haussler D., Ryder O., 2014. The birds of Genome10K. GigaScience 3, 32. pmid:25685332
  55. Jetz W., Thomas G., Joy J., Hartmann K., Mooers A., 2012. The global diversity of birds in space and time. Nature 491, 444–448. pmid:23123857
  56. Upham N.S., Esselstyn J.A., Jetz W., 2019. Inferring the mammal tree: Species-level sets of phylogenies for questions in ecology, evolution, and conservation. PLoS Biology 17.