September GRIME-AI Update: NSF Grant Alert!

🚨 Updates 🚨

  • NEW NSF GRANT: Innovative Resources: Cyberinfrastructure and community to leverage ground-based imagery in ecohydrological studies
  • We’re looking forward to starting this in January and sharing GRIME-AI, workflows and data products with the user community! 🎉
  • Details at https://www.nsf.gov/awardsearch/showAward?AWD_ID=2411065

Featured Resource (article, database, etc.)

PhD student John Stranzl has been digging into The Color of Rivers (Gardner et al. 2021). This work is based on satellite remote sensing but is interesting to read with ground-based cameras in mind. The list of citing literature is also worth a look.

Gardner, J. R., Yang, X., Topp, S. N., Ross, M. R. V., Altenau, E. H., & Pavelsky, T. M. (2021). The Color of Rivers. Geophysical Research Letters, 48(1), e2020GL088946. https://doi.org/10.1029/2020GL088946

Featured Photo Information

Silhouettes of three scientists and a trusty Platte Basin Timelapse camera on a tea-colored Sandhills stream.

Credit: Troy Gilmore

Upcoming Events

Conference Sessions focused on image-based research

  • AWRA, UCOWR, NIWR Joint Conference, St. Louis, MO – Sept 30 through Oct 2, 2024
  • Invited lightning talks and panel at EPSCoR National Conference, Omaha, NE – Oct 13-16, 2024 (co-convener Mary Harner); afternoon of Oct 15, after keynote by Platte Basin Timelapse co-founder Mike Forsberg
  • Poster Session at AGU Annual Meeting, Washington, DC – Dec 9-13, 2024 (co-conveners Erfan Goharian, François Birgand, and Chris Terry)

Thanks for viewing!

April 2024 GRIME Software Fans Update

Updates

  • We hope you’ll consider submitting an abstract to the session entitled “Using ground-based time-lapse imagery in ecohydrological studies: Data, software, and applications” at the AWRA/UCOWR/NIWR conference (https://awra.org/Members/Events_and_Education/Events/2024-Joint-Conference/2024_Joint_Abstracts.aspx; Topical Session Code = G). All are welcome! There is also an AI in Watershed Analysis session.
  • GRIME-AI features continue to expand. As part of the image triage (data cleaning) step, we now calculate and store image rotation (camera movement) information for each image.
  • We have had several new GRIME2 releases as we work with a group that is testing octagon targets at their river monitoring sites.
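Camera-movement information like the rotation/translation data mentioned above can be estimated in several ways. One classic approach for detecting translational shift between two frames is FFT phase correlation; the sketch below is illustrative only and is not GRIME-AI's actual implementation (the function name and frame sizes are made up):

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the (row, col) translation of `img` relative to `ref`
    using FFT phase correlation. Illustrative sketch only -- not
    GRIME-AI's actual camera-movement algorithm."""
    F_ref = np.fft.fft2(ref)
    F_img = np.fft.fft2(img)
    cross = np.conj(F_ref) * F_img
    cross /= np.abs(cross) + 1e-12            # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap indices past the midpoint to negative shifts
    return tuple(p - s if p > s // 2 else p
                 for p, s in zip(peak, ref.shape))

# synthetic check: circularly shift a random frame by (3, -5) pixels
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
moved = np.roll(ref, (3, -5), axis=(0, 1))
print(estimate_shift(ref, moved))   # → (3, -5)
```

Rotation can be recovered with a related trick (phase correlation on a log-polar resampling of the spectrum), but the translational case above shows the core idea.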

Feature Photo Information

The attached figures are composites built from pixel columns of 800 time-lapse images captured midday at an urban pond during 2020–2023. One composite was created from the center pixel column of each image, showing ice, algae, and vegetation. The other shows only vegetation, drawn from the far-right pixel column of each image. Camera movement can easily be detected with these visualizations. Original images courtesy of Aaron Mittelstet and Platte Basin Timelapse. Visualization concept inspired in part by Andrew Richardson (PhenoCam).
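The column-stacking idea behind these composites can be sketched in a few lines. Here, simple grayscale arrays stand in for the real frames, and the column indices are arbitrary:

```python
import numpy as np

def column_composite(images, col):
    """Build a composite by taking pixel column `col` from each image
    and stacking the columns left-to-right (one column per time step)."""
    return np.stack([img[:, col] for img in images], axis=1)

# synthetic stand-ins for midday time-lapse frames (height x width)
frames = [np.full((120, 160), t, dtype=np.uint8) for t in range(100)]

center = column_composite(frames, col=80)    # center-pixel composite
right = column_composite(frames, col=159)    # far-right composite
print(center.shape)   # (120, 100): image height x number of frames
```

Each output column is one moment in time, so seasonal patterns (and sudden camera movement) show up as horizontal structure in the composite.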

Thanks for viewing!

Composite images showing seasonal patterns in vegetation, water quality, and snow cover.

New GRIME2 release with CLI generator

This release makes creating CLI calls much easier. The ROIs and other parameters you select in the GUI can be used to generate CLI parameters, which are output as text to the textbox below the main image. There are two “Create command line” buttons: one on the Calibration tab and one on the Find Line tab.

https://github.com/gaugecam-dev/GRIME2/releases/tag/v0.3.0.8

Screenshot of the GRIME2 GUI showing the Create command line button and output in the Line Find tab.
Create command line button and output in the Line Find tab.

What time-lapse camera should I use for image-based hydrology with machine learning?

Planning an ecological or hydrological research project using trail cams? If so, you might be wondering which camera and mounting system to use. We have some ideas. But first, here are some helpful references from groups with many years of experience in camera traps and ecohydrological monitoring:

Learn from Andrew Richardson’s account of PhenoCam’s history and lessons learned from operating a large scientific camera network: https://doi.org/10.1016/j.agrformet.2023.109751

Get inspired by Platte Basin Timelapse’s artistic time-lapse camera network, oriented toward conservation storytelling in support of science: https://plattebasintimelapse.com/timelapses/

Explore streams and rivers on the United States Geological Survey’s HIVIS site: https://apps.usgs.gov/hivis/

Honestly, the groups above have more experience installing time-lapse cameras than we do. That said, we have been learning and are happy to share the approach we are now using at stream monitoring sites like the Kearney Outdoor Learning Area (KOLA).

The Camera:

Our current preference is the Reconyx Hyperfire 2 Professional camera.

Why “Professional”? These cameras are $60 more than the standard Hyperfire 2. Based on the Reconyx comparison tool, here are the key differences:

Reconyx Hyperfire 2 Professional Camera front view
Reconyx Hyperfire
(source: www.reconyx.com)
  • Greater range of video length options
  • More frame rate options
  • More trigger delay options
  • Motion sensor scheduling
  • More time-lapse intervals and surveillance modes
  • Greater range of ISO and nighttime shutter settings
  • Higher/lower operating temperatures
  • Optional external power connector
  • Option for custom focal system
  • Optional external trigger
  • Software with more options

The Security Enclosure:

A good lock and security enclosure are important for most sites. But we also like the Reconyx security enclosure for another reason: image stability. Minimizing camera movement is one of the most important considerations for effective monitoring! Of course, a security enclosure does not guarantee a perfectly stable camera. But we like the way the enclosure can be mounted in a permanent position and the camera can be removed for maintenance and placed back in the security enclosure without large translational or rotational shifts in the field of view. We have used other cameras and mounting systems where the camera and/or mount has to be loosened or removed when swapping the SD card and/or changing batteries. When we re-attach the camera and/or mount, it’s a guessing game as to whether we’ve returned the camera to a position that captures even a similar field of view.

Reconyx security enclosures
Reconyx security enclosures as seen at https://www.reconyx.com/product/Security-Enclosure.

Other Accessories:

  • We are just now trying this heavy duty swivel mount: https://www.reconyx.com/product/Heavy-Duty-Swivel-Mount. We have heard good things about it and will update when we have more experience.
  • When using lithium AA batteries in a standalone camera, we get long run times. We are just getting acquainted with the cellular camera, which obviously requires more power. We have heard good things about Reconyx’s external power supply. It has a nice form factor, but it is pretty simple: a solar panel, charge controller, and replaceable battery. If you’re already comfortable setting up solar power and/or have the supplies in your lab, you may be able to save a little by doing it yourself. We’ll update here after we have some experience with this power supply: https://www.reconyx.com/product/solar-charger-10-watt.

Things we think you should NOT do:

Pretty please, do not just stick a t-post in the ground and attach a camera. You will get a lot of camera movement and it will make life more difficult when you want to process your images.

Do not just strap a camera on a tree. If you are using a tree and can’t use screws or lag bolts, then securely attach an enclosure (directly, or via swivel mount that is strapped to the tree). If you just strap the camera to a tree and then have to remove the strap and camera each time you swap an SD card and/or batteries, you will get a lot of camera movement and it will make life more difficult when you want to process your images.

In conclusion, we think the Reconyx camera is a good choice for our research projects. It is a relatively expensive option and much cheaper cameras might acquire imagery that is suitable for your work. We’d be happy to hear if there are other options that have worked well for you. When it’s all said and done, the best advice we can offer is to create a stable mounting system that minimizes changes in the field of view. Otherwise, you will get a lot of camera movement and it will make life more difficult when you want to process your images!

GRIME2 Image-based Water Level Software: New Release

We have a new GRIME2 release. This bug-fix release runs a little more quickly and uses less disk space when the octagon target is used: unneeded debug information and images are no longer created.

Download and run the latest of the following installers to replace the previous software:

https://github.com/gaugecam-dev/GRIME2/releases/tag/v0.3.0.4-beta

https://github.com/gaugecam-dev/GRIME2/releases/tag/v0.3.0.5-beta

Water Level Camera: mini-octagon target test

This post describes the first testing of a mini-octagon calibration target for measuring water level with a camera and machine vision algorithms.

Image of three signs installed in a small stream. Each sign contains a blue octagon shape that is used to calibrate images for water level measurements.
Mini-octagon (center) is approximately eight inches across, leading to a much smaller footprint for the target background. The other two octagons in the image are printed on plexiglass backgrounds two feet in width. Image credit: Mary Harner

The original GaugeCam “bow-tie” calibration target was about three feet wide and four feet tall. This target yielded excellent calibration and precise water level readings. However, the size of the target is obtrusive in images and prohibitive at some sites.

The next generation calibration target, the “octagon target,” is approximately two feet wide. The benefits of the octagon are that (1) the target footprint is much smaller, and (2) the calibration target remains above the water line, so a calibration can be performed for every image. Calibrating each image is more robust because it accounts for camera movement, which is inevitable. The large octagon target performs on par with the original bow-tie calibration target, as shown in Ken Chapman’s dissertation.
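The per-image calibration idea can be illustrated with a small homography sketch: detected octagon vertex pixels plus their known physical layout yield a pixel-to-world mapping, and any waterline pixel then converts to physical coordinates. This is a generic direct linear transform (DLT) sketch, not GRIME2's actual code; the octagon dimensions and camera numbers below are made up:

```python
import numpy as np

def fit_homography(pix, world):
    """Direct Linear Transform: fit a 3x3 homography mapping pixel
    coordinates to world coordinates from >= 4 point pairs.
    Illustrative sketch only; GRIME2's calibration code may differ."""
    A = []
    for (u, v), (x, y) in zip(pix, world):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def pixel_to_world(H, u, v):
    """Apply the fitted homography to one pixel coordinate."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

# made-up regular-octagon vertex layout (world coords in cm) and a
# made-up camera mapping, just to exercise the round trip
ang = np.pi / 8 + np.arange(8) * np.pi / 4
world = np.c_[10 * np.cos(ang), 50 + 10 * np.sin(ang)]
H_true = np.array([[4.0, 0.3, 200.0],
                   [0.1, -4.2, 900.0],
                   [0.0, 0.0, 1.0]])
pix = []
for x, y in world:
    p = H_true @ np.array([x, y, 1.0])
    pix.append(p[:2] / p[2])

H = fit_homography(pix, world)
# any detected waterline pixel now converts to physical coordinates
print(pixel_to_world(H, *pix[0]))
```

Because the octagon stays above the waterline, a fit like this can be repeated for every image, which is what makes per-image calibration robust to camera movement.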

Our goal with the mini-octagon is to reduce the target background to the minimal size required for robust calibration and water level measurement. The current size is larger than a traditional staff gauge but a reasonable size for installation in many environments. Below you can see our field fabrication of the first mini-octagon, using a sheet of Coroplast, spray paint, and an octagon stencil.

Initial tests show that our algorithms can detect the vertices of the mini-octagon in low-light conditions and under IR illumination.

Mini-octagon detection for image calibration. We are working to determine how much calibration precision is reduced by the smaller octagon.
The latest KOLA imagery can be found at https://apps.usgs.gov/hivis/camera/NE_Kearney_Outdoor_Learning_Area_RISE.

December 2023 GRIME Software Fans Update

Featured Photo

Photo credit: Mary Harner and Troy Gilmore, using a Platte Basin Timelapse (PBT) style camera on the South Branch Middle Loup River near Whitman, NE.

Updates

  • Congratulations to GRIME Lab team member Ken Chapman, who defended his dissertation and will graduate this month. Great job, Ken!
  • GRIME-related Proposals: two full proposals and a preproposal that involved GRIME software were submitted in November and December.
  • Check out the latest updates on our blog and let us know if/how we can support your project.

Software Information

What is GRIME?

GRIME (GaugeCam Remote Imagery Manager Educational) is open-source, commercial-friendly software (Apache 2.0 license) that enables ecohydrological research using ground-based time-lapse imagery. The first GRIME software for measuring water level with cameras was developed in François Birgand’s lab at North Carolina State University.

What are GRIME2 and GRIME-AI?

GRIME2 and GRIME-AI are the two desktop applications developed by Ken Chapman and John Stranzl, respectively.

Who is involved in the GRIME Lab?

See the growing list at the bottom of our home page: https://gaugecam.org/

We collaborate closely with Mary Harner at University of Nebraska at Kearney: https://witnessingwatersheds.com/

ITESM Collaboration: Data Fusion Project

Building on the successful WaterFront Software and KOLA Data Portal projects, we embarked on another student-led adventure in the Fall 2023 semester! Professor Elizabeth López Ramos connected the GRIME Lab team with an excellent student team at Tecnológico de Monterrey (ITESM). These students led the Data Fusion Project.

The Data Fusion Project is a first step toward integrating data fusion features in the GRIME-AI user interface. And the ITESM team dived DEEP into the software development life cycle on this one! As “clients,” the GRIME Lab team had multiple meetings and filled out an extensive questionnaire, which made us really think through our desired requirements. The ITESM team extensively documented this process, which is a major benefit to everyone going forward. Below are some screenshots from the ITESM team’s final presentation.

Functional requirements defined through client interviews, questionnaires and prototyping.
Other requirements identified.
Screenshot of live demo during the final presentation. The GUI was built using tkinter. CSV files can be loaded, data merged based on timestamps and data can be visualized.
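The timestamp-based merging shown in the demo can be sketched with pandas; the column names and values below are hypothetical stand-ins for image-feature and stage data:

```python
import pandas as pd

# hypothetical image-feature table (one row per photo)
images = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-10-01 12:00", "2023-10-01 12:15",
                                 "2023-10-01 12:30"]),
    "mean_green": [112.4, 118.9, 121.3],
})

# hypothetical sensor table (stage readings on a different clock)
stage = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-10-01 11:55", "2023-10-01 12:10",
                                 "2023-10-01 12:25"]),
    "stage_m": [1.42, 1.45, 1.47],
})

# fuse each image with the nearest earlier stage reading,
# rejecting matches more than 10 minutes apart
fused = pd.merge_asof(images.sort_values("timestamp"),
                      stage.sort_values("timestamp"),
                      on="timestamp", direction="backward",
                      tolerance=pd.Timedelta("10min"))
print(fused)
```

The `direction` and `tolerance` choices matter: cameras and sensors rarely share a clock, so an as-of merge with an explicit tolerance avoids silently pairing observations that are hours apart.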

The ITESM team did a great job of working across campuses and completing the behind-the-scenes work required to finish this project. Their project can be found on GitHub.

Overall, we are grateful for the opportunity to work with the ITESM Team. They were very professional and worked hard to create a viable product!

Many thanks to:

  • Carlos Eduardo Pinilla López
  • Daniel Bakas Amuchástegui
  • José David Herrera Portillo
  • Juan Carlos Ortiz de Montellano Bochelen
  • Karla Paola Ruiz García
  • Romeo Alfonso Sánchez López
  • Víctor Manuel Gastélum Huitzil
Next steps identified by the ITESM team

GRIME-AI: A Quick Check on Time Required for Accessing and Processing USGS HIVIS Imagery

This video shows steps and time required for data download and image analysis of over 5,000 images from a USGS HIVIS site on the Elkhorn River in Nebraska. The process includes setting regions of interest (ROIs) and extraction of color and other scalar image features suitable for machine learning applications. This work was done on a laptop computer running GRIME-AI v0.0.3.8c-003.

PROCESSES COMPLETED:

• Data selection

• Imagery download

• Stage and discharge data download

• Image processing

• Image feature dataset created

• Ready for data fusion, then ML modeling
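The image-processing step above reduces each photo to scalar features within user-selected ROIs. The sketch below is an illustrative stand-in (mean and standard deviation per color channel), not GRIME-AI's actual feature set:

```python
import numpy as np

def roi_color_features(img, roi):
    """Mean and standard deviation of each channel inside a rectangular
    ROI (row0, row1, col0, col1). Illustrative sketch of scalar feature
    extraction; GRIME-AI computes a richer feature set."""
    r0, r1, c0, c1 = roi
    patch = img[r0:r1, c0:c1].reshape(-1, img.shape[2])
    return np.concatenate([patch.mean(axis=0), patch.std(axis=0)])

# synthetic stand-in for one downloaded HIVIS frame (H x W x RGB)
rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)

feats = roi_color_features(frame, roi=(100, 200, 300, 400))
print(feats.shape)   # (6,): mean R, G, B then std R, G, B
```

Run over thousands of frames, rows like this become the tabular dataset that is “ready for data fusion, then ML modeling.”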

LAPTOP SPECIFICATIONS:

Intel i7-9850H @ 2.60 GHz

32 GB RAM

NVIDIA GeForce GTX 1650

Home fiber internet connection over Wi-Fi

TIME REQUIRED:

The overall process took 1 hour 4 minutes, including all download and processing time. Extrapolating, this suggests about 4 hours 15 minutes to download and process one year’s worth of imagery when working in my home office.
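The extrapolation is simple linear scaling. Assuming a hypothetical archive of about 20,000 images per year (roughly one image every 26 minutes; the actual count depends on the camera's capture interval), the numbers line up:

```python
# linear scaling of the observed processing time; the yearly image
# count is an assumption, not a measured value
observed_images = 5000
observed_minutes = 64            # the 1:04 run reported above
yearly_images = 20000            # hypothetical: ~one image per 26 min

minutes_per_year = observed_minutes * yearly_images / observed_images
print(f"{minutes_per_year / 60:.1f} hours")   # 4.3 hours
```

That is consistent with the roughly 4:15 estimate, and it scales directly if your site captures more or fewer images per year.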

GRIME-AI Open-Source Software for Analysis of Ground-based Time-lapse Imagery for Ecohydrological Science