PUBLIC RELEASE: GRIME AI Software for Ground-Based Time-Lapse Imagery

The project folder on my computer is labeled “2021_GRIME_AI.” The date is a reminder of the time and energy invested in GRIME AI. Our motivation has always been to enable others to extract information from imagery, and we’re thrilled to share this software, which facilitates the entire data science workflow, from data acquisition to visualization/exploration to model building and deployment.

John Stranzl is the lead developer of GRIME AI. Including the prototype he brought to the GRIME Lab, he has written almost every line of code. He had the vision to create the complete data science workflow, from data acquisition to model deployment, in GRIME AI.

Special credit goes to Mary Harner at the University of Nebraska at Kearney. Mary’s connection with the Platte Basin Timelapse project and depth of experience with image-based projects for science and communication were foundational for GRIME AI. Her mentorship skills are unparalleled and a benefit to the many students who have participated in GRIME Lab projects.

Ken Chapman, who developed the first GRIME software (GRIME2) and conducted the prototype study for the GRIME AI workflow, deserves all kinds of credit as well. Without Ken’s relentless energy, expertise, and networking skills, we would never have connected with John or built GRIME AI.

This is the first public release of GRIME AI. As early testers, we’ve encountered a few “undocumented features”—but our beta testing experience confirms that the benefits GRIME AI delivers far outweigh any reason to delay its debut. This marks the beginning of something much bigger, and we’re thrilled to finally share GRIME AI with the world.

Download the installer: Go.unl.edu/GRIMEAIUserForm

Visit the GRIME AI Wiki: Github.com/JohnStranzl/GRIME-AI/wiki

GitHub Repository: Github.com/JohnStranzl/GRIME-AI

We have been fortunate to have support, financial and moral, from like-minded individuals and agencies. Thank you to Frank Engel, Keegan Johnson, and Russ Lotspeich for the opportunity to work with the USGS. Thank you to Marty Briggs for connecting us. Thank you also to the National Science Foundation for funding, and to collaborator Andrew Richardson (NAU/PhenoCam) for joining us on this journey. We are truly “living the dream” when we can match exciting projects with great collaborators and humans.

This material is based upon work supported by the U.S. Geological Survey under Grant/Cooperative Agreement No. G23AC00141-00. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the opinions or policies of the U.S. Geological Survey. Mention of trade names or commercial products does not constitute their endorsement by the U.S. Geological Survey.
This material is based upon work supported by the National Science Foundation under Grant No. 2411065.

New GRIME2 release with CLI generator

This release makes the creation of CLI calls much easier. The ROIs and other parameters you select in the GUI can be used to create CLI parameters and output them as text in the textbox below the main image. There are two “Create command line” buttons: one on the Calibration tab and one on the Find Line tab.

https://github.com/gaugecam-dev/GRIME2/releases/tag/v0.3.0.8

Screenshot of the GRIME2 GUI showing the “Create command line” button and its output on the Find Line tab.
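Once generated, the command line can be reused outside the GUI, for example from a batch script. Below is a minimal Python sketch (not part of GRIME2) of one way to run the copied text; the command string is a placeholder, since the real parameters come from the GUI’s textbox.

```python
import shlex
import subprocess

# Paste the exact text produced by the "Create command line" button here.
# This string is a placeholder, not real GRIME2 syntax.
command = "grime2 <generated parameters go here>"

# Run the call and surface its output.
result = subprocess.run(shlex.split(command), capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    print("error:", result.stderr)
```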

ITESM Collaboration: Data Fusion Project

Building on the successful WaterFront Software and KOLA Data Portal projects, we embarked on another student-led adventure in the Fall 2023 semester! Professor Elizabeth López Ramos connected the GRIME Lab team with an excellent student team at Tecnológico de Monterrey (ITESM). These students led the Data Fusion Project.

The Data Fusion Project is a first step toward integrating data fusion features into the GRIME-AI user interface. And the ITESM team dived DEEP into the software development life cycle on this one! As “clients,” the GRIME Lab team had multiple meetings with the students and filled out an extensive questionnaire, which made us really think through the requirements we wanted. The ITESM team documented this process extensively, which is a major benefit to everyone going forward. Below are some screenshots from the ITESM team’s final presentation.

Functional requirements defined through client interviews, questionnaires and prototyping.
Other requirements identified.
Screenshot of the live demo during the final presentation. The GUI was built using tkinter. CSV files can be loaded, data can be merged based on timestamps, and the results can be visualized.
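For readers curious about the underlying idea, here is a minimal pandas sketch of timestamp-based fusion in the same spirit as the demo (this is not the ITESM team’s code; the file names, column names, and 5-minute tolerance are assumptions for illustration):

```python
import pandas as pd

# Two hypothetical CSVs, each with a timestamp column.
images = pd.read_csv("image_features.csv", parse_dates=["timestamp"])
stage = pd.read_csv("stage_data.csv", parse_dates=["timestamp"])

# merge_asof pairs each image row with the nearest stage reading,
# dropping pairs farther apart than the chosen tolerance.
fused = pd.merge_asof(
    images.sort_values("timestamp"),
    stage.sort_values("timestamp"),
    on="timestamp",
    direction="nearest",
    tolerance=pd.Timedelta("5min"),
)
fused.to_csv("fused_data.csv", index=False)
```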

The ITESM team did a great job of working across campuses and completing a lot of behind-the-scenes work required to finish this project. Their project can be found on GitHub.

Overall, we are grateful for the opportunity to work with the ITESM Team. They were very professional and worked hard to create a viable product!

Many thanks to:

  • Carlos Eduardo Pinilla López
  • Daniel Bakas Amuchástegui
  • José David Herrera Portillo
  • Juan Carlos Ortiz de Montellano Bochelen
  • Karla Paola Ruiz García
  • Romeo Alfonso Sánchez López
  • Víctor Manuel Gastélum Huitzil
Next steps identified by the ITESM team

GRIME-AI v0.0.3.7 Camera Trap Software for Ecohydrology: Current and Future Features

GRIME-AI v0.0.3.7 image processing screen.

This post builds on our recent update about GRIME-AI capabilities. The previous post (and video) described features in GRIME-AI that are reasonably stable (although subject to occasional changes in APIs for public data sources). The description below is our current roadmap to a full GRIME-AI suite of tools for using imagery in ecohydrological studies. Please contact us if you see major gaps or are interested in helping us test the software as new features are developed!

The following features are implemented or planned for GRIME-AI:

You will notice asterisks below: a single asterisk indicates *planned future functionality (timeframe: months to years) and a double asterisk indicates **functionality under development (timeframe: weeks to months). All other features are implemented but subject to additional user testing as we work toward a stable public release. GRIME-AI is being developed as open source under a commercial-friendly license (Apache 2.0).

  • Acquire PhenoCam imagery and paired NEON sensor data
  • Acquire USGS HIVIS imagery and paired stage, discharge and other sensor data
  • Data cleaning (image triage)
    • Automatically identify and remove low-information imagery
  • Data fusion*
    • Identify gaps in image and other sensor data*
    • Documented resolution of data gaps*
    • Documented data alignment criteria*
      • Choose precision for “paired” timestamps (e.g., +/- 5 min between image timestamp and other sensor data timestamp)*
  • Image analysis
    • Calculate and export scalar features for ML with low computational requirements
      • Image analysis algorithms include (see the sketch after this list):
        • K-means color clustering (user-selected, up to 8 clusters, HSV for each cluster)
        • Greenness index (PhenoCam approach)
        • Shannon Entropy
        • Intensity
        • Texture
    • Draw masks for training segmentation models** (see the mask sketch after this list)
      • Draw polygon shapes
      • Save masks and overlay images**
      • Export mask**
    • Image calibration and deterministic water level detection (currently a separate Windows installer called GRIME2, whose command-line interface will allow us to implement this in GRIME-AI)**
      • Draw calibration ROI for automatic detection of octagon calibration targets
      • Draw edge detection ROI for automatic detection of water edge
      • Enter reference water level and octagon facet length
      • Process image folders
        • Save overlay images
    • All scalar feature values, ROIs and polygon shapes exported as .csv and .json*
  • Data products and export*
    • Documentation of data source and user decisions, where final datasets include:
      • Metadata for all source data*
      • Documented user decisions from data cleaning and data fusion processes*
      • Documentation of calculated image features*
        • Sample image overlay showing location of ROIs*
        • Sample image showing segmentation masks and labels*
        • Coordinates and labels of all ROIs (.csv and .json)*
        • Breakpoints for major camera movement, image stabilization, or other major changes in imagery*
      • A .csv and a .json file with aligned, tidy data that is appropriate for training/testing ML models*
      • Metadata appropriate for storing final data product (scalar data only) in CUAHSI HydroShare or similar data repository*
      • Documentation of imagery source, including timestamps and metadata for original imagery retrieved from public archive*
  • Modeling and model outputs*
    • Build segmentation models (e.g., to automatically detect water surfaces in images)*
    • Build ML models from scalar image features and other sensor data and/or segmentation results*
    • Export model results, performance metrics, and metadata*
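As referenced in the image analysis list above, here is a minimal Python sketch of two of the scalar features: the greenness index in the PhenoCam style (the green chromatic coordinate, GCC = G / (R + G + B)) and Shannon entropy of the grayscale histogram. This illustrates the concepts only; it is not the GRIME-AI implementation, and the file name is a placeholder.

```python
import numpy as np
from PIL import Image

def greenness_index(rgb: np.ndarray) -> float:
    """Mean green chromatic coordinate, GCC = G / (R + G + B), over the image."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=2)
    total[total == 0] = 1.0  # avoid division by zero on pure-black pixels
    return float((rgb[:, :, 1] / total).mean())

def shannon_entropy(gray: np.ndarray) -> float:
    """Shannon entropy (bits) of the 8-bit grayscale histogram."""
    counts = np.bincount(gray.ravel(), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]  # log2 is undefined at zero probability
    return float(-(p * np.log2(p)).sum())

img = Image.open("sample_image.jpg")  # placeholder file name
print(f"GCC: {greenness_index(np.asarray(img.convert('RGB'))):.4f}")
print(f"Entropy: {shannon_entropy(np.asarray(img.convert('L'))):.4f} bits")
```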

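And as referenced in the segmentation item above, here is a minimal sketch of rasterizing a hand-drawn polygon into a binary mask plus an overlay image using Pillow. The polygon coordinates and file names are assumptions for illustration, not the GRIME-AI workflow itself.

```python
from PIL import Image, ImageDraw

# (x, y) vertices of a polygon traced over, e.g., the water surface (assumed values).
polygon = [(120, 300), (480, 310), (500, 450), (100, 440)]

base = Image.open("sample_image.jpg").convert("RGBA")  # placeholder file name
mask = Image.new("L", base.size, 0)                    # black background
ImageDraw.Draw(mask).polygon(polygon, fill=255)        # white inside the polygon
mask.save("water_mask.png")

# Overlay: tint the masked region blue for a visual sanity check.
tint = Image.new("RGBA", base.size, (0, 120, 255, 90))
overlay = Image.composite(Image.alpha_composite(base, tint), base, mask)
overlay.convert("RGB").save("water_overlay.png")
```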
All of the above are being developed under the MESH development philosophy.

ITESM Capstone Collaboration: KOLA Data Portal

In Spring 2023 the GaugeCam team at the University of Nebraska worked with two excellent student groups on their capstone projects in the Departamento de Computación and Escuela de Ingeniería y Ciencias at Tecnológico de Monterrey (ITESM).

The first group we are featuring is the KOLA Data Portal Team. These students did an amazing job creating a web interface for multimodal environmental data! Professor Elizabeth López Ramos was the instructor for this capstone course.

This project focused on creating a data portal for the Kearney Outdoor Learning Area (KOLA), a site located next to Kearney High School in Kearney, NE. The project was designed to simulate real interaction with clients and included two phases.

PHASE 1: Gather list of client requirements and develop proposal.

  • Requirement gathering and analysis based on meetings with client.
  • Research and brainstorm to devise innovative and practical solutions.
  • Create a concise proposal outlining solutions, timeline, budget, and benefits for the client.
  • Present refined proposal to the client, highlighting alignment with requirements and receiving feedback.

PHASE 2: Build out the data ingestion platform based on the accepted proposal.

  • Construct platform, including features and functionality described in the proposal.
  • Test platform functionality, performance, and security to meet client requirements and industry standards.
  • Present the platform to the client for feedback and iterate to meet expectations and requirements.

A view and description of KOLA can be seen in the screenshot below.

A view of KOLA courtesy of https://outdoorclassne.com.

The key deliverables for the KOLA Portal project included the ability to upload and access several types of sensor data, including sound recordings, imagery, and scalar data (e.g., water levels). We met weekly with the KOLA Team. Students led those meetings, providing important updates and proposing next steps. As described in their final presentation, their solution consisted of the following:

The KOLA Portal has an attractive welcome screen, including a site map showing the various sensors that provide environmental data.

The green rectangle in the screenshot below highlights how we can now navigate from viewing the sensors to adding a sensor, adding scalar data, and adding multimedia data on the platform.

The portal also allows us to view sample data we provided the team, as shown below.

The KOLA Team also provided excellent documentation of the whole project! This was provided in a summary handoff email at the end of the semester. The video below shows the User Manual for the portal. The team also provided (1) an API reference and (2) a deployment guide that walks the user through the process of setting up the environment, navigating the codebase, and deploying the portal with the Next.js framework and Vercel hosting platform.

Overall, the KOLA Data Portal Team was highly productive and very professional. We are very grateful to Professor Elizabeth López Ramos and Professor Gildardo Sánchez Ante for involving us in the course. We learned a lot in the process and would love to work with other ITESM students in the future!