September GRIME-AI Update: NSF Grant Alert!

🚨 Updates 🚨

  • NEW NSF GRANT: Innovative Resources: Cyberinfrastructure and community to leverage ground-based imagery in ecohydrological studies
  • We’re looking forward to starting this in January and sharing GRIME-AI, workflows and data products with the user community! 🎉
  • Details at https://www.nsf.gov/awardsearch/showAward?AWD_ID=2411065

Featured Resource (article, database, etc.)

PhD student John Stranzl has been digging into The Color of Rivers (Gardner et al. 2021). This work is based on satellite remote sensing but is interesting to read with ground-based cameras in mind. The list of citing literature is also worth a look.

Gardner, J. R., Yang, X., Topp, S. N., Ross, M. R. V., Altenau, E. H., & Pavelsky, T. M. (2021). The Color of Rivers. Geophysical Research Letters, 48(1), e2020GL088946. https://doi.org/10.1029/2020GL088946

Featured Photo Information

Silhouettes of three scientists and a trusty Platte Basin Timelapse camera on a tea-colored Sandhills stream.

Credit: Troy Gilmore

Upcoming Events

Conference Sessions focused on image-based research

  • AWRA, UCOWR, NIWR Joint Conference, St. Louis, MO – Sept 30 through Oct 2, 2024
  • Invited lightning talks and panel at EPSCoR National Conference, Omaha, NE – Oct 13-16, 2024 (co-convener Mary Harner); afternoon of Oct 15, after keynote by Platte Basin Timelapse co-founder Mike Forsberg
  • Poster Session at AGU Annual Meeting, Washington, DC – Dec 9-13, 2024 (co-conveners Erfan Goharian, François Birgand, and Chris Terry)

Thanks for viewing!

April 2024 GRIME Software Fans Update

Updates

  • We hope you’ll consider submitting an abstract to the session entitled “Using ground-based time-lapse imagery in ecohydrological studies: Data, software, and applications” at the AWRA/UCOWR/NIWR conference (https://awra.org/Members/Events_and_Education/Events/2024-Joint-Conference/2024_Joint_Abstracts.aspx; Topical Session Code = G). All are welcome! There is also an AI in Watershed Analysis session.
  • GRIME-AI features continue to expand. As part of the image triage (data cleaning) step, we now calculate and store image rotation (camera movement) information for each image.
  • We have had several new GRIME2 releases as we work with a group that is testing octagon targets at their river monitoring sites.

Featured Photo Information

The attached figures are composite images built from pixel columns taken from 800 time-lapse images captured midday at an urban pond during 2020-2023. One composite was created using the center pixel column from each image, showing ice, algae, and vegetation. The other composite shows only vegetation, from the far-right pixel column of each image. Camera movement can easily be detected with these visualizations. Original images courtesy of Aaron Mittelstet and Platte Basin Timelapse. Visualization concept inspired in part by Andrew Richardson (PhenoCam).
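The pixel-column compositing described above can be sketched in a few lines of Python with NumPy (a minimal illustration, not the actual GRIME-AI code; the demo uses synthetic frames in place of real time-lapse JPEGs):

```python
import numpy as np

def column_composite(images, col=None):
    """Stack one pixel column from each image side by side.

    images: iterable of (H, W, 3) uint8 arrays, in chronological order.
    col: column index to sample; defaults to the image center.
    """
    columns = []
    for img in images:
        c = img.shape[1] // 2 if col is None else col
        columns.append(img[:, c, :])
    # Each x-position in the result corresponds to one source image.
    return np.stack(columns, axis=1)  # shape: (H, n_images, 3)

# Demo with synthetic frames (real use: load each JPEG, e.g. with Pillow).
frames = [np.full((4, 6, 3), i * 10, dtype=np.uint8) for i in range(5)]
composite = column_composite(frames)           # center columns
far_right = column_composite(frames, col=-1)   # far-right columns
print(composite.shape)  # (4, 5, 3): one column per frame
```

With ~800 midday frames this yields an 800-pixel-wide strip in which seasonal color changes read left to right, and any sudden jag in the strip flags camera movement.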

Thanks for viewing!

Composite images showing seasonal patterns in vegetation, water quality, and snow cover.

December 2023 GRIME Software Fans Update

Featured Photo

Photo credit: Mary Harner and Troy Gilmore, using a Platte Basin Timelapse (PBT) style camera on the South Branch Middle Loup River near Whitman, NE.

Updates

  • Congratulations to GRIME Lab team member Ken Chapman, who defended his dissertation and will graduate this month. Great job, Ken!
  • GRIME-related Proposals: two full proposals and a preproposal that involved GRIME software were submitted in November and December.
  • Check out the latest updates on our blog and let us know if/how we can support your project.

Software Information

What is GRIME?

GRIME (GaugeCam Remote Imagery Manager Educational) is open-source, commercially friendly software (Apache 2.0 license) that enables ecohydrological research using ground-based time-lapse imagery. The first GRIME software for measuring water level with cameras was developed in François Birgand’s lab at North Carolina State University.

What are GRIME2 and GRIME-AI?

GRIME2 and GRIME-AI are the two desktop applications developed by Ken Chapman and John Stranzl, respectively.

Who is involved in the GRIME Lab?

See the growing list at the bottom of our home page: https://gaugecam.org/

We collaborate closely with Mary Harner at University of Nebraska at Kearney: https://witnessingwatersheds.com/

GRIME-AI: A Quick Check on Time Required for Accessing and Processing USGS HIVIS Imagery

This video shows steps and time required for data download and image analysis of over 5,000 images from a USGS HIVIS site on the Elkhorn River in Nebraska. The process includes setting regions of interest (ROIs) and extraction of color and other scalar image features suitable for machine learning applications. This work was done on a laptop computer running GRIME-AI v0.0.3.8c-003.

PROCESSES COMPLETED:

• Data selection

• Imagery download

• Stage and discharge data download

• Image processing

• Image feature dataset created

• Ready for data fusion, then ML modeling

LAPTOP SPECIFICATIONS:

Intel Core i7-9850H @ 2.60 GHz

32 GB RAM

NVIDIA GeForce GTX 1650

Home fiber internet connection over Wi-Fi

TIME REQUIRED:

The overall process took 1 hour 4 minutes, including all download and processing time. Extrapolating, this suggests about 4 hours 15 minutes to download and process one year’s worth of imagery when working in my home office.
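As a back-of-the-envelope check on that extrapolation (the yearly image count below is an assumption, not a figure from this post; it depends on the site's capture interval):

```python
# Observed: ~5,000 images downloaded and processed in 1:04 (64 minutes).
minutes_observed = 64
images_observed = 5000
minutes_per_image = minutes_observed / images_observed  # ~0.0128 min/image

# Assumed yearly image count (hypothetical): ~20,000 images/year
# roughly reproduces the ~4:15 estimate above.
images_per_year = 20000
est_minutes = minutes_per_image * images_per_year
print(f"{est_minutes / 60:.2f} hours")  # prints "4.27 hours"
```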

GRIME-AI Open-Source Software for Analysis of Ground-based Time-lapse Imagery for Ecohydrological Science

GRIME-AI v0.0.3.7 Camera Trap Software for Ecohydrology: Current and Future Features

GRIME-AI v0.0.3.7 image processing screen.

This post builds on our recent update about GRIME-AI capabilities. The previous post (and video) described features in GRIME-AI that are reasonably stable (although subject to occasional changes in APIs for public data sources). The description below is our current roadmap to a full GRIME-AI suite of tools for using imagery in ecohydrological studies. Please contact us if you see major gaps or are interested in helping us test the software as new features are developed!

The following features are implemented or planned for GRIME-AI:

Single asterisks indicate *planned future functionality (timeframe = months to years) and double asterisks indicate **functionality under development (timeframe = weeks to months). All other features are implemented, but subject to additional user testing as we work toward a stable public release. GRIME-AI is being developed as open-source, commercially friendly software (Apache 2.0 license).

  • Acquire PhenoCam imagery and paired NEON sensor data
  • Acquire USGS HIVIS imagery and paired stage, discharge and other sensor data
  • Data cleaning (image triage)
    • Automatically identify and remove low-information imagery
  • Data fusion*
    • Identify gaps in image and other sensor data*
    • Documented resolution of data gaps*
    • Documented data alignment criteria*
      • Choose precision for “paired” timestamps (e.g., +/- 5 min between image timestamp and other sensor data timestamp)*
  • Image analysis
    • Calculate and export scalar features for ML with low computational requirements
      • Image analysis algorithms include:
        • K-means color clustering (user selected, up to 8 clusters, HSV for each cluster)
        • Greenness index (PhenoCam approach)
        • Shannon Entropy
        • Intensity
        • Texture
    • Draw masks for training segmentation models**
      • Draw polygon shapes
      • Save masks and overlay images**
      • Export mask**
    • Image calibration and deterministic water level detection (currently a separate Windows installer called GRIME2, but we have command-line functionality to implement this in GRIME-AI)**
      • Draw calibration ROI for automatic detection of octagon calibration targets
      • Draw edge detection ROI for automatic detection of water edge
      • Enter reference water level and octagon facet length
      • Process image folders
        • Save overlay images
    • All scalar feature values, ROIs and polygon shapes exported as .csv and .json*
  • Data products and export*
    • Documentation of data source and user decisions, where final datasets include:
      • Metadata for all source data*
      • Documented user decisions from data cleaning and data fusion processes*
      • Documentation of calculated image features*
        • Sample image overlay showing location of ROIs*
        • Sample image showing segmentation masks and labels*
        • Coordinates and labels of all ROIs (.csv and .json)*
        • Breakpoints for major camera movement, image stabilization, or other major changes in imagery*
      • A .csv and a .json file with aligned, tidy data that is appropriate for training/testing ML models*
      • Metadata appropriate for storing final data product (scalar data only) in CUAHSI HydroShare or similar data repository*
      • Documentation of imagery source, including timestamps and metadata for original imagery retrieved from public archive*
  • Modeling and model outputs*
    • Build segmentation models (e.g., to automatically detect water surfaces in images)*
    • Build ML models from scalar image features and other sensor data and/or segmentation results*
    • Export model results, performance metrics, and metadata*
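Two of the scalar features listed above have standard definitions. Here is a minimal NumPy sketch (not the GRIME-AI implementation; the ROI below is a synthetic stand-in) of the PhenoCam-style greenness index, i.e. the green chromatic coordinate, and Shannon entropy of the intensity histogram:

```python
import numpy as np

def greenness_index(roi):
    """Green chromatic coordinate (GCC), the PhenoCam greenness metric:
    mean(G) / (mean(R) + mean(G) + mean(B)) over the ROI."""
    r, g, b = (roi[..., i].astype(float).mean() for i in range(3))
    return g / (r + g + b)

def shannon_entropy(roi, bins=256):
    """Shannon entropy (bits) of the grayscale intensity histogram."""
    gray = roi.astype(float).mean(axis=-1)
    hist, _ = np.histogram(gray, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins (0 * log 0 is taken as 0)
    return float(-(p * np.log2(p)).sum())

# Demo on a synthetic ROI (real use: crop the user-drawn ROI from each image).
roi = np.zeros((10, 10, 3), dtype=np.uint8)
roi[..., 1] = 200  # pure green patch
print(greenness_index(roi))   # 1.0
print(shannon_entropy(roi))   # 0.0 (uniform image, zero entropy)
```

Computed per image and per ROI, such low-cost scalars give the tidy, ML-ready feature table described in the export steps above.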

All of the above are being developed under the MESH development philosophy.

GRIME-AI: Software for camera trap hydrology

John Stranzl has been continuously adding features to GRIME-AI, which is open-source software for acquiring data and processing imagery from ground-based cameras.

Here’s a quick update on the current capabilities of GRIME-AI v0.0.3.3. This video features:

  • Downloader for PhenoCam imagery and other data at NEON sites
  • Downloader for USGS Imagery and paired stage and discharge data
  • File and data types downloaded to local drive