GRIME-AI v0.0.3.7 Camera Trap Software for Ecohydrology: Current and Future Features

GRIME-AI v0.0.3.7 image processing screen.

This post builds on our recent update about GRIME-AI capabilities. The previous post (and video) described features in GRIME-AI that are reasonably stable (although subject to occasional changes in APIs for public data sources). The description below is our current roadmap to a full GRIME-AI suite of tools for using imagery in ecohydrological studies. Please contact us if you see major gaps or are interested in helping us test the software as new features are developed!

The following features are implemented or planned for GRIME-AI:

Asterisks indicate *planned future functionality (timeframe: months to years) and **functionality under development (timeframe: weeks to months). All other features are implemented, but subject to additional user testing as we work toward a stable public release. GRIME-AI is developed as open source under the commercial-friendly Apache 2.0 license.

  • Acquire PhenoCam imagery and paired NEON sensor data
  • Acquire USGS HIVIS imagery and paired stage, discharge and other sensor data
  • Data cleaning (image triage)
    • Automatically identify and remove low-information imagery
  • Data fusion*
    • Identify gaps in image and other sensor data*
    • Documented resolution of data gaps*
    • Documented data alignment criteria*
      • Choose precision for “paired” timestamps (e.g., +/- 5 min between image timestamp and other sensor data timestamp)*
  • Image analysis
    • Calculate and export scalar features for ML with low computational requirements
      • Image analysis algorithms include:
        • K-means color clustering (user selected, up to 8 clusters, HSV for each cluster)
        • Greenness index (PhenoCam approach)
        • Shannon Entropy
        • Intensity
        • Texture
    • Draw masks for training segmentation models**
      • Draw polygon shapes
      • Save masks and overlay images**
      • Export mask**
    • Image calibration and deterministic water level detection (currently available as a separate Windows installer called GRIME2, but we have command-line code that will allow us to implement this in GRIME-AI)**
      • Draw calibration ROI for automatic detection of octagon calibration targets
      • Draw edge detection ROI for automatic detection of water edge
      • Enter reference water level and octagon facet length
      • Process image folders
        • Save overlay images
    • All scalar feature values, ROIs and polygon shapes exported as .csv and .json*
  • Data products and export*
    • Documentation of data source and user decisions, where final datasets include:
      • Metadata for all source data*
      • Documented user decisions from data cleaning and data fusion processes*
      • Documentation of calculated image features*
        • Sample image overlay showing location of ROIs*
        • Sample image showing segmentation masks and labels*
        • Coordinates and labels of all ROIs (.csv and .json)*
        • Breakpoints for major camera movement, image stabilization, or other major changes in imagery*
      • A .csv and a .json file with aligned, tidy data that is appropriate for training/testing ML models*
      • Metadata appropriate for storing final data product (scalar data only) in CUAHSI HydroShare or similar data repository*
      • Documentation of imagery source, including timestamps and metadata for original imagery retrieved from public archive*
  • Modeling and model outputs*
    • Build segmentation models (e.g., to automatically detect water surfaces in images)*
    • Build ML models from scalar image features and other sensor data and/or segmentation results*
    • Export model results, performance metrics, and metadata*
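As a concrete illustration of two of the scalar image features listed above, here is a minimal Python sketch (numpy only; a PhenoCam-style green chromatic coordinate and Shannon entropy). This is an assumption-laden sketch for illustration, not GRIME-AI's actual implementation, and the synthetic ROI values are made up:

```python
import numpy as np

def greenness_index(rgb):
    """Green chromatic coordinate (GCC), a PhenoCam-style greenness index:
    sum(G) / (sum(R) + sum(G) + sum(B)) over the ROI."""
    r, g, b = (rgb[..., i].astype(float).sum() for i in range(3))
    return g / (r + g + b)

def shannon_entropy(gray, bins=256):
    """Shannon entropy (bits) of an 8-bit grayscale image histogram."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Synthetic, vegetation-heavy ROI (a real workflow would load a .jpg here)
rgb = np.zeros((64, 64, 3), dtype=np.uint8)
rgb[..., 0], rgb[..., 1], rgb[..., 2] = 60, 180, 40
gcc = greenness_index(rgb)          # 180 / (60 + 180 + 40) = 0.643
entropy = shannon_entropy(rgb.mean(axis=2).astype(np.uint8))  # 0.0 (uniform)
print(round(gcc, 3), entropy)
```

In practice these scalars would be computed per ROI and per whole image, then exported for ML as described above.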

All of the above are being developed under the MESH development philosophy:

GRIME-AI: Software for camera trap hydrology

John Stranzl has been continuously adding features to GRIME-AI, which is open-source software for acquiring data and processing imagery from ground-based cameras.

Here’s a quick update on the current capabilities of GRIME-AI v0.0.3.3. This video features:

  • Downloader for PhenoCam imagery and other data at NEON sites
  • Downloader for USGS Imagery and paired stage and discharge data
  • File and data types downloaded to local drive

ITESM Capstone Collaboration: WaterFront Software

Hydrologists are used to jumping through hoops to access data. But it doesn’t have to be that way all the time! In a single semester, the stellar Tecnológico de Monterrey (ITESM) WaterFront Project Team developed software that will easily display monthly summaries of streamflow at multiple USGS stream gages. As a bonus, we can quickly view flow duration curves for the same gages.

Thanks to the ITESM team’s expertise and hard work, this software will be used to generate Extension hydrology reports for the Platte River in Nebraska. Platte River streamflows are critical for agricultural production and for important wildlife habitat in Nebraska.

The UNL GaugeCam Team, along with Doug Hallum at the West Central Research, Education and Extension Center, presented this challenge to the capstone student group in the Departamento de Computación and Escuela de Ingeniería y Ciencias. ITESM students, led by Professor Elizabeth López Ramos, tackled this project in two phases.

PHASE 1: Gather list of client requirements and develop proposal.

  • Requirement gathering and analysis based on meetings with client.
  • Research and brainstorm to devise innovative and practical solutions.
  • Create a concise proposal outlining solutions, timeline, budget, and benefits for the client.
  • Present refined proposal to the client, highlighting alignment with requirements and receiving feedback.

PHASE 2: Build out the software based on the accepted proposal.

  • Construct platform, including features and functionality described in the proposal.
  • Test software functionality, performance, and security to meet client requirements and industry standards.
  • Present the software to the client for feedback and iterate to meet expectations and requirements.
  • Deliver a functional Windows installer and documentation to the client.

Just as in real-world situations, we (UNL) came into this project with many ideas for the team. In other words, we gave them the very real challenge of (1) setting realistic expectations for the client, and (2) helping the client know what they actually want in a final product. And what we got was a streamlined, professional Windows application and good documentation.

Check out the animation below to see some of the WaterFront features.

  • Select a range of months to summarize
  • Display monthly statistics by hovering mouse
  • Display additional USGS gage sites
  • Show flow-duration curves
  • Download graphics and data

We are truly grateful to the ITESM WaterFront Team for their dedication to this project.

  • Joel Fernando Santillán Santana
  • Jorge Luis Salgado Ledezma
  • Milton Eduardo Barroso Ramírez
  • Miriam Paulina Palomera Curiel
  • José Ricardo Vanegas Castillo

Special thanks also to Professor Elizabeth López Ramos and Professor Gildardo Sánchez Ante for a wonderful experience working with your class. We hope we can continue working together!

ITESM Capstone Collaboration: KOLA Data Portal

In Spring 2023 the GaugeCam team at the University of Nebraska worked with two excellent student groups on their capstone projects in the Departamento de Computación and Escuela de Ingeniería y Ciencias at Tecnológico de Monterrey (ITESM).

The first group we are featuring is the KOLA Data Portal Team. These students did an amazing job creating a web interface for multimodal environmental data! Professor Elizabeth López Ramos was the instructor for this capstone course.

This project was focused on creating a data portal for the Kearney Outdoor Learning Area (KOLA) site that is located next to the Kearney, NE High School. The project was designed to simulate real interaction with clients and included two phases.

PHASE 1: Gather list of client requirements and develop proposal.

  • Requirement gathering and analysis based on meetings with client.
  • Research and brainstorm to devise innovative and practical solutions.
  • Create a concise proposal outlining solutions, timeline, budget, and benefits for the client.
  • Present refined proposal to the client, highlighting alignment with requirements and receiving feedback.

PHASE 2: Build out the data ingestion platform based on the accepted proposal.

  • Construct platform, including features and functionality described in the proposal.
  • Test platform functionality, performance, and security to meet client requirements and industry standards.
  • Present the platform to the client for feedback and iterate to meet expectations and requirements.

A view and description of KOLA can be seen in the screenshot below.

A view of KOLA courtesy of https://outdoorclassne.com.

The key deliverables for the KOLA Portal project included the ability to upload and access several types of sensor data, including sound recordings, imagery, and scalar data (e.g., water levels). We met weekly with the KOLA Team. Students led those meetings, providing important updates and proposing next steps. As described in their final presentation, their solution consisted of the following:

The KOLA Portal has an attractive welcome screen, including a site map showing the various sensors that provide environmental data.

The green rectangle in the screenshot below highlights how we can now navigate from viewing the sensors, to adding a sensor, adding scalar data, and adding multimedia data on the platform.

The portal also allows us to view sample data we provided the team, as shown below.

The KOLA Team also provided excellent documentation of the whole project! This was provided in a summary handoff email at the end of the semester. The video below shows the User Manual for the portal. The team also provided (1) an API reference and (2) a deployment guide that walks the user through the process of setting up the environment, navigating the codebase, and deploying the portal with the Next.js framework and Vercel hosting platform.

Overall, the KOLA Data Portal Team were highly productive and very professional. We are very grateful to Professor Elizabeth López Ramos and Professor Gildardo Sánchez Ante for involving us in the course. We learned a lot in the process and would love to work with other ITESM students in the future!

GaugeCam Octagon Requirements: size of octagon in the image

There are six key components to be concerned about when setting up the GaugeCam octagon calibration target in the field.

  1. All facet lengths of the octagon must be exactly the same.
  2. The background in the stream must be mounted orthogonal to the water surface.
  3. The camera should be mounted as directly in front of the background as possible.
  4. Field measurements must be made from the upper left vertex to the water level.
  5. The size of the octagon (in pixels) in the image must be large enough to be detected by the GRIME2 search algorithm.
  6. The thickness (in pixels) of the black border around the octagon must be large enough to be detected.

As you might guess, we are focusing on the last two items in this blog post, both of which relate to the size of the octagon in the image required for successful calibration for water level measurement.

To provide guidelines for the octagon dimensions (in pixels) required for successful calibration, we used imagery from the Kearney Outdoor Learning Area (KOLA). The image used was about 1MB when stored as a .jpg, as captured with a Reconyx trail camera.

In this simple test we resampled the images to reduce resolution. As resolution decreased, so did the size of the octagon: fewer pixels spanned the width and facet lengths of the octagon, and the black border around the octagon became more pixelated.

The animation above shows where the octagon search algorithm begins to break down as image resolution decreases. The decrease is indicated by the scale annotation in each image, which gives the image size as a percentage of the original (2304 x 1295 pixels, including the black strips at the top and bottom of the image). Major failures of octagon detection and calibration started to occur below 60% of the original image resolution. The horizontal width of the blue area in the octagon target is 137 pixels in the 60% image.
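For readers who want to reproduce this kind of resolution test, here is a minimal Python sketch (numpy only) of nearest-neighbor downsampling. The frame dimensions come from the test above; the 229-pixel full-resolution width is back-calculated from the 137-pixel width reported for the 60% image, so treat it as illustrative:

```python
import numpy as np

def resample(img, scale):
    """Nearest-neighbor resample to `scale` (0 < scale <= 1) of the
    original linear resolution, mimicking the resolution-reduction test."""
    h, w = img.shape[:2]
    rows = (np.arange(int(h * scale)) / scale).astype(int)
    cols = (np.arange(int(w * scale)) / scale).astype(int)
    return img[rows][:, cols]

# Original frame size from the test above (2304 x 1295 pixels)
img = np.zeros((1295, 2304), dtype=np.uint8)
small = resample(img, 0.60)
print(small.shape)  # (777, 1382)

# Feature sizes shrink proportionally: a target face spanning ~229 pixels
# at full resolution spans about 137 pixels in the 60% image.
print(round(229 * 0.60))
```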

Below is a table showing the desired width of the black border around the octagon. The suggested width for robust detection is 15 pixels, although the 60% image had a border width of only about 11 pixels. The width of the black border can be measured in your images using the measurement tool in GRIME2.

Pixel width required for black outline of octagon target.

The major takeaway: whatever the size of the images captured, the current GRIME2 octagon search and calibration algorithm should be robust for images where the blue part of the octagon is at least 137 pixels wide and the black border is, at the very minimum, 11 pixels wide (preferably at least 15 pixels). These values are based on a very simple test with an ideal image. Larger octagon targets (in pixels) are generally preferable, and the overall performance of the system is still highly dependent on image scene quality (fewer shadows, glare, etc.) and proper installation of the target background in the stream.

Ken Chapman defends his GRIME Lab dissertation!

We had a great online and in-person audience for Ken Chapman’s dissertation defense on Thursday May 8, 2023. Ken gave an excellent overview of his dissertation, as shown below. He will graduate in December 2023.

Congratulations Dr. Chapman!

A pdf of Ken’s presentation is available here.

Ken’s three major dissertation projects have resulted in robust, free, open-source water level measurement software and several scientific publications.

More information and the GRIME2 software can be found here.

Publications are as follows:
Chapman, K. W., Gilmore, T. E., Chapman, C. D., Birgand, F., Mittelstet, A. R., Harner, M. J., et al. (2022). Technical Note: Open-Source Software for Water-Level Measurement in Images With a Calibration Target. Water Resources Research, 58(8), e2022WR033203. https://doi.org/10.1029/2022WR033203

Chapman, K., Gilmore, T., Mehrubeoglu, M., Chapman, C., Mittelstet, A., & Stranzl, J. E. (2023). Is there sufficient information in images to fill large data gaps of stage and discharge measurements with machine learning? PLOS Water [revised, in review].

Chapman, K. W., Gilmore, T. E., Harner, M. J., Stranzl, J. E., Chapman, C. D., Birgand, F., Mittelstet, A. R., et al. (2023). Technical Note: Improved Calibration Target for Open-Source, Image-Based Water-Level Measurement. In preparation for Water Resources Research.

GaugeCam: 2022 in review

Research:

We monitored water quality in the Nebraska Sandhills, combining water sampling with time-lapse imagery from multiple cameras, including two co-located Platte Basin Timelapse cameras. The USGS 104b program supported this project. The animation below illustrates changing water color with changing dissolved organic carbon (DOC)* measured in water samples.

Imagery from a Reconyx game cam (left) and a Platte Basin Timelapse style camera (upper right) showing water color over the dissolved organic carbon (DOC) monitoring period.
*NOTE: DOC concentrations may have been affected by sample storage in the automated field sampler (ISCO) prior to sample filtration and acidification.

At the Kearney Outdoor Learning Area (KOLA), we installed two different GaugeCam targets as we develop less intrusive methods for image-based water level measurement. This installation, which also includes a traditional water level sensor (HOBO transducer), is the focus of PhD candidate Ken Chapman’s final dissertation chapter. This work is supported in part by the UNL Collaboration Initiative.

Image-based water level measurement at KOLA, with original bowtie calibration target (right) and octagon calibration target (left). The PVC tube on the left contains a HOBO water level sensor (transducer).

In Bazile Creek, we monitored water levels using a camera and GRIME2 software. This work is partially supported by USDA (PR-HPA LTAR Network).

Newly installed stop-sign target on a tributary to Bazile Creek.

Software Development:

GRIME2 (water level) software was published in a peer-reviewed article.

GRIME2 has now been updated to accommodate both a bowtie target (see right-hand side of KOLA image, above) and an octagon calibration target (left-hand side of KOLA image).

Partnerships:

GaugeCam Team Presentations:

T.E. Gilmore, Stranzl, J., Chapman, K., Harner, M. GRIME-AI Open-Source Ecosystem for Time-lapse Imagery. USGS CDI Script-a-thon, October 11, 2022 (virtual, 50+ attendees)

Stranzl, J., T.E. Gilmore, M. Harner, and K. Chapman. 2022. GRIME-AI software for incorporating information from ground-based camera imagery with other sensor data (talk). Joint Aquatic Sciences Meeting, Grand Rapids, Michigan. May 2022.

Harner, M., T. Gilmore, K. Chapman, J. Bajelan, A. Klein, and C. Wagner. An introduction to the Kearney Outdoor Learning Area (KOLA) for experiential learning in ecohydrological research. 2022 Platte River Basin Conference. Kearney, NE. October 2022.

GaugeCam – ITESM Collaboration on Artificial Intelligence for Camera-based Hydrology

The GaugeCam (GRIME Lab) team, including PI Troy Gilmore at the University of Nebraska-Lincoln and Mary Harner at the University of Nebraska at Kearney, has been using imagery in eco-hydrologic studies and science communication for about a decade. The University of Nebraska is also home to a large time-lapse image archive from high-resolution (DSLR) cameras deployed across the Platte River Basin. Since 2010, the Platte Basin Timelapse project has acquired and archived over 3 million high-quality images of water features across the watershed. These images are captured hourly during daylight hours and contain large amounts of untapped scientific information. Our team is devoted to (1) extracting ecological and hydrological information from imagery, and (2) building software that makes these tasks easy for other scientists.

Over the last 9 months, the GaugeCam GRIME Lab has benefited greatly from a fast-developing collaboration with scientists at Tecnológico de Monterrey (ITESM). Our collaboration involves both teaching and research.

Teaching

Dr. Gildardo Sánchez Ante, Professor in the Department of Computation in the School of Engineering and Sciences at the Guadalajara Campus, has incorporated image-based hydrology projects in two courses. The GaugeCam team has been joining these classes via Zoom. We have had the opportunity to introduce the students to the dataset and hydrology concepts. We have also heard updates from the student project teams and are looking forward to final project presentations this week. The students are doing a fantastic job of extracting information from imagery and building machine learning models that successfully predict streamflow in the North Platte River in Nebraska!

The two courses where Platte Basin Timelapse-derived data are being used are:

  • Advanced Artificial Intelligence for Data Science
  • Business Solution Development Capstone project

Research

We see many exciting opportunities for image-based water monitoring, including in Mexico. Based on our background with image-based hydrology projects and our collaboration with Dr. Sánchez Ante, we are also working closely with Dr. Pabel Antonio Cervantes Avilés to set up and pilot camera-based monitoring at a site on the Atoyac River (see video below for background on this river). Dr. Cervantes Avilés is in the Water Science and Technology Group, in the School of Engineering and Sciences at the Puebla Campus.

We are excited to have partners at ITESM with much-needed expertise in artificial intelligence and water quality. We look forward to new insights into water quality and camera-based monitoring approaches in 2023 and beyond.

GaugeCam: development philosophy and trajectory of the GRIME software suite

Since 2010, the GaugeCam team has been working on open-source software for ground-based time-lapse imagery (recently referred to by a colleague as “camera trap hydrology”).

The project started in François Birgand’s lab at North Carolina State University. GaugeCam was my undergraduate research project, and Ken Chapman wrote the open-source software to measure water level in highly conditioned time-lapse images. We published a paper in 2013 (Gilmore et al. 2013, Journal of Hydrology) showing potential precision of +/- 3 mm, or about the height of a meniscus, under carefully controlled laboratory conditions. Application of GaugeCam at a precisely installed and well-maintained field site in a tidal marsh in eastern NC showed increased uncertainty (Birgand et al. 2022, PLOS Water), though the uncertainty is still reasonable compared to other common water level monitoring methods. The current iteration of this mature, open-source software for measuring water level is the GaugeCam Remote Image Manager 2 (GRIME2), as described in a technical note in Water Resources Research (Chapman et al. 2022). More details, including major recent improvements to the ease of installation, are available at https://gaugecam.org/grime2-details/.

GaugeCam water level camera system targets in a stream in Nebraska.
GaugeCam stop-sign calibration target installed next to the original bow-tie target for water level measurement using cameras.

More recently, we have been working with Mary Harner (University of Nebraska at Kearney, see https://witnessingwatersheds.com) and her colleagues with the Platte Basin Timelapse (PBT) project (https://plattebasintimelapse.com). PBT has over 3 million high-resolution (DSLR) hourly daytime images from 60+ cameras across the Platte River basin. These images are available to University of Nebraska researchers for research and teaching purposes. Mary and her colleague Emma Brinley-Buckley had previously published research based on PBT imagery, for example, extracting the extent of wetland inundation from imagery. Similarly, Ken Chapman (now a PhD candidate at UNL) used PBT imagery to fill simulated data gaps in USGS stream gauge data (withdrawn preprint in HESS, now revised and in review in PLOS Water).

This was the inspiration for GRIME-AI, a software suite that will allow much broader application of time-lapse (camera trap) imagery in hydrological and ecological studies. In short, GRIME-AI is intended to help us move across the spectrum of potential image types that can be acquired from fixed ground-based cameras. The figure below shows this vision, starting with highly conditioned GaugeCam water level imagery on the left, to more flexible (but more difficult to handle) unconditioned imagery on the right. UNL PhD student John Stranzl is the lead programmer on GRIME-AI. More information is available at https://gaugecam.org/grime-ai-details/.

The image arc, from highly conditioned (left) to unconditioned imagery (right), accommodated by GRIME-AI software. Imagery courtesy of Idaho Power, PhenoCam, and PBT.
Screenshot of GRIME-AI software interface showing color clusters and other scalar image features extracted from an image of a stream in the Nebraska Sandhills.
GRIME-AI v0.0.2.2, for automated data acquisition from NEON sites, image triage, and extraction of scalar image features.

Long-term, our goal is for GRIME-AI to be an open-source ecosystem that combines simplicity for new users (a very low barrier to entry for non-programmers) with the opportunity for experienced programmers to contribute to the capabilities of the software. A key guiding concept for GRIME-AI is that it should reduce mundane and repetitive tasks to the bare minimum, while still being powerful and flexible. Our software development philosophy is below.

MESH philosophy for the GRIME software ecosystem

Early features enabled in GRIME-AI are as follows:

  • Automatic download of NEON data
  • Automatic download of PhenoCam images for NEON sites
  • Image triage (removal of dark and/or blurry imagery), with csv report
  • Visualization (overlays) of common edge detection algorithms (Canny, Sobel, ORB, etc.)
  • Custom region of interest (ROI) placement on reference image
  • Calculation of scalar image features for each ROI and whole image, including:
    • Color cluster analysis for each ROI and whole image
    • Greenness calculations for each ROI and whole image
    • Intensity and entropy for each ROI and whole image
  • Generation of csv with all selected scalar image features extracted for folder(s) (recursive) of images
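A recursive folder-to-CSV feature export like the last item above can be sketched in a few lines of Python. This is an illustration, not GRIME-AI's actual code: the `.npy` files and the single mean-intensity feature are stand-ins for real image formats and the fuller feature set:

```python
import csv
import tempfile
from pathlib import Path

import numpy as np

def image_feature_report(folder, out_csv):
    """Recursively walk `folder`, compute one scalar feature per image
    (mean intensity here), and write a tidy CSV report."""
    rows = []
    for path in sorted(Path(folder).rglob("*.npy")):  # stand-in for *.jpg
        img = np.load(path)
        rows.append({"file": path.name, "mean_intensity": float(img.mean())})
    with open(out_csv, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["file", "mean_intensity"])
        writer.writeheader()
        writer.writerows(rows)
    return rows

# Demo on a temporary folder with one synthetic "image" in a subfolder
with tempfile.TemporaryDirectory() as d:
    sub = Path(d) / "site_a"
    sub.mkdir()
    np.save(sub / "img_001.npy", np.full((4, 4), 100, dtype=np.uint8))
    rows = image_feature_report(d, Path(d) / "features.csv")
print(rows)  # [{'file': 'img_001.npy', 'mean_intensity': 100.0}]
```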

The obvious next steps are:

  • Data fusion (combining other sensor data with scalar image features extracted in GRIME-AI)
  • Anchoring images/ROIs to account for camera movement
  • Capabilities to build classifiers from fused datasets
  • Automated image segmentation
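As a sketch of what the data fusion step could look like, pairing each image's features with the nearest sensor reading within a +/- 5 minute tolerance might use `pandas.merge_asof`. The timestamps and values below are made up, and this is one possible approach rather than GRIME-AI's implementation:

```python
import pandas as pd

# Hypothetical image-feature and sensor tables with slightly offset timestamps
features = pd.DataFrame({
    "timestamp": pd.to_datetime(["2022-06-01 10:02", "2022-06-01 11:01"]),
    "greenness": [0.41, 0.43],
})
stage = pd.DataFrame({
    "timestamp": pd.to_datetime(["2022-06-01 10:00", "2022-06-01 11:00"]),
    "stage_m": [1.12, 1.10],
})

# Pair each image with the nearest stage reading within +/- 5 minutes;
# unmatched images get NaN, which makes data gaps explicit and documentable.
fused = pd.merge_asof(
    features.sort_values("timestamp"),
    stage.sort_values("timestamp"),
    on="timestamp",
    direction="nearest",
    tolerance=pd.Timedelta("5min"),
)
print(fused[["timestamp", "greenness", "stage_m"]])
```

The `tolerance` argument is where a user-selected pairing precision (e.g., +/- 5 min) would be recorded as part of the documented alignment criteria.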

What does it take to install GaugeCam? Here are the gritty details and caveats.

Today I wrote a long email to a colleague interested in setting up GaugeCam. Why not share (a lightly-edited version) with the world?!

First, here’s a recent animation of me installing a target. This might take a moment to load.

Installation of stop-sign target for GaugeCam water level measurement system.

Second, there are a number of additional resources available at https://gaugecam.org/grime2-details/, including this document that describes the installation details for a bowtie target at our North Carolina tidal marsh site: http://gaugecam.org/wp-content/uploads/2022/06/Gaugecam_org_background_installation_guideline.pdf.

A small photo gallery of a stop-sign target installation in Nebraska:

Some practical considerations and details:

  1. The first question that needs to be answered is the level of reliability and accuracy that is required for your application. If this is an easily-accessed (for maintenance) “demo” site, then this system is perfect for facilitating science communication, etc. If it’s a remote site, with only occasional access for maintenance and data collection, we strongly recommend putting in a cheap transducer alongside the GaugeCam system (e.g., HOBO, $300).
  2. Accuracy depends on (1) how many real-world mm or cm are represented by each pixel in the image, and (2) the quality of installation and maintenance of the background target. The following are issues to consider for field application:
    1. In controlled lab experiments, we can achieve high accuracy (+/- 3 mm, about the size of a meniscus; see Gilmore et al. 2013).
    2. In a carefully maintained tidal marsh installation, accuracy was lower, but still quite good (see Birgand et al. 2022).
    3. You will encounter foggy mornings, spider webs on lens, and other similar environmental issues when using cameras. Expect data gaps of minutes to hours due to these issues.
    4. Biofouling is a universal problem across many applications and industries, and we are actively working to mitigate its effects in our application. In the nutrient-rich agricultural streams where we work, biofouling accumulates within 7-10 days, which requires regular cleaning.
    5. The background must be plumb (perpendicular to the water surface).
    6. The original bow-tie target (template here, nominally 3’ x 4’) was used in the studies above. The new stop-sign target (template here, nominally 2’ x 4’) is experimental, but is smaller and still seems to give pretty good results. The bow-tie requires a survey of the real-world location of bowtie intersections. The stop-sign target requires only the facet length measurement (assumed to be the same for all 8 facets on the printed target) and reference measurement from the bottom left corner.
  3. In terms of installation, here is a parts list from my recent installations in sandy to slightly gravelly streambeds:
    1. Target
      1. Target background, matte print laminated on plexiglass*
      2. Two treated 4×4 posts, 8’ long [NOTE: before digging post holes you should have utilities located; contact your local utilities for this (usually free) service!]
      3. Two treated 2×4 boards, 8’ long
      4. Two post-base spikes for 4×4 nominal lumber**
      5. One ¾” treated sheet of plywood***
      6. Short (1” or 1 ¼”) pan-head screws (for attaching the plexiglass to the plywood)
      7. Long (3”) outdoor decking screws
      8. Thin wood wedges or spacers (for adjusting background so it is plumb – you might be able to cut these in the field)
    2. Camera
      1. We suggest Reconyx cameras due to their quality, though nearly any game camera will do
      2. Suggest RAM mount products to minimize any camera movement (example 1, example 2)
      3. Suggest adding a lock on camera for security
      4. Suggest a treated 4×6 post for mounting the camera; a 4×4 post at the very minimum.
      5. Camera can be mounted on a large tree or similar, but this will usually create a good bit of movement of the camera. Small amounts of movement can be handled by the software, but minimal movement is better.
    3. Tools that you need (at the very minimum)
      1. Sledge hammer
      2. Post hole digger
      3. Cordless drill
      4. Cordless saw(s) (at least a reciprocating saw)
      5. Level
      6. Measuring tape
      7. Screwdriver

*We are looking for a better alternative that does not require as much cleaning and/or is more resistant to biofouling. The matte finish seems like a good attachment surface for biofouling. If you find a local sign shop for printing, I can send you the contact info for my sign shop so they can talk.

**I have used these in the sandy streams, where I cannot dig holes more than ~1 ft into the streambed (the sand collapses in), so adding these spikes on the bottom helps solidify the installation.

***You can print the background on very thick plexiglass and skip the plywood, but I found this to be expensive. So I printed on ¼” plexiglass and mounted on plywood backing.
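The accuracy consideration in item 2 above (how many real-world mm each pixel represents) can be made concrete with a one-line calculation. The numbers below are illustrative, not from a real installation:

```python
def mm_per_pixel(facet_length_mm, facet_length_px):
    """Image scale from a known octagon facet length: the physical facet
    length (measured on the printed target) divided by its length in
    pixels (measured in the image, e.g., with the GRIME2 measurement tool)."""
    return facet_length_mm / facet_length_px

# Illustrative: a 250 mm facet spanning 100 pixels gives 2.5 mm per pixel,
# so one pixel of uncertainty in the detected water edge is ~2.5 mm of stage.
print(mm_per_pixel(250, 100))
```

A larger target in the frame (more pixels per facet) therefore directly improves the best-case water level resolution.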