The project folder on my computer is labeled “2021_GRIME_AI.” The date is a reminder of time and energy invested in GRIME AI. Our motivation has always been to enable others to extract information from imagery and we’re thrilled to share this software that facilitates the entire data science workflow, from data acquisition to visualization/exploration to model building and deployment.
John Stranzl is the lead developer of GRIME AI. Including the prototype he brought to the GRIME Lab, he has written almost every line of code. He had the vision to create the complete data science workflow, from data acquisition to model deployment, in GRIME AI.
Special credit goes to Mary Harner at the University of Nebraska at Kearney. Mary’s connection with the Platte Basin Timelapse project and depth of experience with image-based projects for science and communication were foundational for GRIME AI. Her mentorship skills are unparalleled and a benefit to the many students who have participated in GRIME Lab projects.
Ken Chapman, who developed the first GRIME software (GRIME2) and conducted the prototype study for the GRIME AI workflow, deserves all kinds of credit as well. Without Ken’s relentless energy, expertise, and networking skills, we would never have connected with John or built GRIME AI.
This is the first public release of GRIME AI. As early testers, we’ve encountered a few “undocumented features”—but our beta testing experience confirms that the benefits GRIME AI delivers far outweigh any reason to delay its debut. This marks the beginning of something much bigger, and we’re thrilled to finally share GRIME AI with the world.
We have been fortunate to have support, financial and moral, from like-minded individuals and agencies. Thank you to Frank Engel, Keegan Johnson, and Russ Lotspeich for the opportunity to work with USGS, and to Marty Briggs for connecting us. Thank you also to the National Science Foundation for funding, and to collaborator Andrew Richardson (NAU/PhenoCam) for joining us on this journey. We are truly “living the dream” when we can match exciting projects with great collaborators and humans.
This material is based upon work supported by the U.S. Geological Survey under Grant/Cooperative Agreement No. G23AC00141-00. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the opinions or policies of the U.S. Geological Survey. Mention of trade names or commercial products does not constitute their endorsement by the U.S. Geological Survey. This material is based upon work supported by the National Science Foundation under Grant No. 2411065.
Deadman’s Run is a flashy urban stream. A distinct green color is prominent after storm events. The composite images above were constructed from a single morning (approximately 8:30 am) image from each day. The upper composite image was sliced so a beaver dam is visible in the foreground. The bottom image was sliced and cropped vertically to better display the color of the water in the creek.
The composite images and greenness index were all created using GRIME AI. Once the images were in hand, it took approximately five minutes to set up and run the composite-slice and greenness-index tools. The entire visual took about 15-20 minutes using GRIME AI, Excel, and PowerPoint.
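The newsletter doesn’t spell out the formula behind GRIME AI’s greenness tool, but a common choice in PhenoCam-style analyses is the green chromatic coordinate, GCC = G / (R + G + B), averaged over an image or region of interest. Here is a minimal pure-Python sketch; the function name and the list-of-tuples input are our own illustration, not GRIME AI’s actual API:

```python
def green_chromatic_coordinate(pixels):
    """Mean green chromatic coordinate, GCC = G / (R + G + B),
    over an iterable of (R, G, B) pixel tuples."""
    total = 0.0
    count = 0
    for r, g, b in pixels:
        s = r + g + b
        if s:  # skip pure-black pixels to avoid division by zero
            total += g / s
            count += 1
    return total / count if count else 0.0

# A pure-green pixel gives GCC = 1.0; a gray pixel gives ~0.333,
# so the mean over both is roughly 0.667.
print(green_chromatic_coordinate([(0, 255, 0), (128, 128, 128)]))
```

In practice you would compute this per image across a water or vegetation region of interest, then plot GCC against date to see green-up after storm events.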
Two of the original images are shown below (both revealing an unfortunate amount of litter). More visuals and information on wildlife activity at this site can be found at https://go.unl.edu/dccstats.
Great horned owl on GRIME2 calibration target at Kearney Outdoor Learning Area.
We can’t help but share this image from one of our own Nebraska camera sites. Does this owl look magnificent, astute, angry, aloof, or all of the above?!
Featured Photo Information
Great Horned Owl at KOLA
Credit: Contributed by Mary Harner.
Location: Kearney, NE
We’re always looking for images we can feature monthly via this listserv. Please share!
Featured Resource (article, database, etc.)
Congratulations to USGS and collaborators at Stevens Institute of Technology for their publication on automated ice detection!
Here are example test images from the Kearney Outdoor Learning Area (KOLA). This segmentation model was tuned as part of our USGS-funded project looking at 11 HIVIS sites.
IMAGE 1: Water segmentation on a test image using the KOLA-specific model. In other words, the model was trained at this site, but this image was not used in the training.
IMAGE 2: Water segmentation on a test image using the KOLA-specific model. This image comes from a second camera at this site. The camera that captured this image can be seen in IMAGE 1, above.
IMAGE 3: This camera was not used in any of the model training for this project. The segmentation was done using a general model that was trained across all 11 sites in the project. Note that grass in the lower left corner has been incorrectly identified as water, along with some rocks in the stream. But overall, the model performance is strong.
IMAGE 4: Under IR illumination at night, the model performance is good in terms of identifying the stream banks. The grass in the lower left is identified. The branch and other items in the stream are not classified correctly.
IMAGES 5, 6, and 7: Water segmentation with wildlife in the imagery.
This week we created a basic user guide for GRIME2. Our next step is to work through some bugs we encountered along the way. More documentation to follow, including image timestamp handling and a how-to for using command line tools.
PhD student John Stranzl has been digging into The Color of Rivers (Gardner et al. 2021). This work is based on satellite remote sensing but is interesting to read with ground-based cameras in mind. The list of citing literature is also worth a look.
Gardner, J. R., Yang, X., Topp, S. N., Ross, M. R. V., Altenau, E. H., & Pavelsky, T. M. (2021). The Color of Rivers. Geophysical Research Letters, 48(1), e2020GL088946. https://doi.org/10.1029/2020GL088946
Featured Photo Information
Silhouettes of three scientists and a trusty Platte Basin Timelapse camera on a tea-colored Sandhills stream.
Credit: Troy Gilmore
Upcoming Events
Conference Sessions focused on image-based research
Invited lightning talks and panel at EPSCoR National Conference, Omaha, NE – Oct 13-16, 2024 (co-convener Mary Harner); afternoon of Oct 15, after keynote by Platte Basin Timelapse co-founder Mike Forsberg
Poster Session at AGU Annual Meeting, Washington, DC – Dec 9-13, 2024 (co-conveners Erfan Goharian, François Birgand, and Chris Terry)
We hope you’ll consider submitting an abstract to the session entitled “Using ground-based time-lapse imagery in ecohydrological studies: Data, software, and applications” at the AWRA/UCOWR/NIWR conference (https://awra.org/Members/Events_and_Education/Events/2024-Joint-Conference/2024_Joint_Abstracts.aspx; Topical Session Code = G). All are welcome! There is also an AI in Watershed Analysis session.
GRIME AI features continue to expand. As part of the image triage (data cleaning) step, we now calculate and store image rotation (camera movement) information for each image.
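We haven’t described the internals of the triage step here, and GRIME AI tracks rotation specifically, but the general idea of quantifying camera movement between frames can be sketched with a simpler case: estimating a translational shift between two grayscale frames by brute-force search. Everything below (function name, tiny 2D-list frames, the SAD metric) is an illustrative assumption, not GRIME AI’s implementation:

```python
def estimate_shift(ref, img, max_shift=3):
    """Estimate the (dy, dx) translation of img relative to ref by
    brute-force search minimizing the mean sum of absolute differences
    over the overlapping region. Frames are equal-size 2D lists of
    grayscale values."""
    h, w = len(ref), len(ref[0])
    best, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            sad, n = 0, 0
            for y in range(max(0, -dy), min(h, h - dy)):
                for x in range(max(0, -dx), min(w, w - dx)):
                    sad += abs(ref[y][x] - img[y + dy][x + dx])
                    n += 1
            score = sad / n
            if best is None or score < best:
                best, best_shift = score, (dy, dx)
    return best_shift
```

A real pipeline would work on full images and estimate rotation as well, typically with feature matching or phase correlation (e.g., via OpenCV) rather than an exhaustive search, but the principle is the same: register each image against a reference and flag frames whose estimated movement exceeds a threshold.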
We have had several new GRIME2 releases as we work with a group that is testing octagon targets at their river monitoring sites.
Featured Photo Information
The attached figures are composite images composed of pixel columns from 800 time-lapse images captured midday at an urban pond during 2020-2023. One composite was created using the center pixels from each image, showing ice, algae and vegetation. The other composite shows only vegetation from the far-right pixel columns of each image. Camera movement can easily be detected with these visualizations. Original images courtesy of Aaron Mittelstet and Platte Basin Timelapse. Visualization concept inspired in part by Andrew Richardson (PhenoCam).
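The column-slicing idea behind these composites can be sketched in a few lines. Assuming each image is represented as a 2D list of pixel values (the actual tool works on full-resolution photographs), the hypothetical function below pulls one pixel column from each image and stacks the columns left to right, so that each vertical stripe in the result corresponds to one day:

```python
def column_composite(images, col=None):
    """Build a composite by taking one pixel column from each image
    and placing the columns side by side, left to right.
    Each image is a list of rows; each row is a list of pixel values.
    If col is None, the center column of each image is used."""
    height = len(images[0])
    composite = [[] for _ in range(height)]
    for img in images:
        c = len(img[0]) // 2 if col is None else col
        for y in range(height):
            composite[y].append(img[y][c])
    return composite

# Three tiny 2x3 "images" whose center columns hold 1, 2, and 3:
imgs = [
    [[0, 1, 0], [0, 1, 0]],
    [[0, 2, 0], [0, 2, 0]],
    [[0, 3, 0], [0, 3, 0]],
]
print(column_composite(imgs))  # [[1, 2, 3], [1, 2, 3]]
```

Passing an explicit `col` near the image edge reproduces the far-right-column composite described above; with 800 daily images, the result is an 800-pixel-wide timeline.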
This release makes creating CLI calls much easier. The ROIs and other parameters you select in the GUI can be used to create CLI parameters and output them as text to the text box below the main image. There are two “Create command line” buttons: one on the Calibration tab and one on the Find Line tab.
Planning an ecological and/or hydrological research project using trail cams? If so, you might be wondering which camera and mounting system to use. We have some ideas. But first, here are some helpful references from groups that have many years of experience with camera traps and ecohydrological monitoring:
Get inspired by Platte Basin Timelapse’s artistic time-lapse camera network, oriented toward conservation storytelling in support of science: https://plattebasintimelapse.com/timelapses/
Honestly, the groups above have more experience installing time-lapse cameras than we do. That said, we have been learning and are happy to share the approach we are now using at stream monitoring sites like the Kearney Outdoor Learning Area (KOLA).
The Camera:
Our current preference is the Reconyx Hyperfire 2 Professional camera.
Why “Professional”? These cameras cost $60 more than the standard Hyperfire 2. Based on the Reconyx comparison tool, here are the key differences:
Reconyx Hyperfire (source: www.reconyx.com)
Greater range of video length options
More frame rate options
More trigger delay options
Motion sensor scheduling
More time-lapse intervals and surveillance modes
Greater range of ISO and nighttime shutter settings
Higher/lower operating temperatures
Optional external power connector
Option for custom focal system
Optional external trigger
Software with more options
The Security Enclosure:
A good lock and security enclosure are important for most sites. But we also like the Reconyx security enclosure for another reason: image stability. Minimizing camera movement is one of the most important considerations for effective monitoring! Of course, a security enclosure does not guarantee a perfectly stable camera. But we like the way the enclosure can be mounted in a permanent position and the camera can be removed for maintenance and placed back in the security enclosure without large translational or rotational shifts in the field of view. We have used other cameras and mounting systems where the camera and/or mount has to be loosened or removed when swapping the SD card and/or changing batteries. When we re-attach the camera and/or mount, it’s a guessing game as to whether we’ve returned the camera to a position that captures even a similar field of view.
Reconyx security enclosures as seen at https://www.reconyx.com/product/Security-Enclosure.
Other Accessories:
We are just now trying this heavy-duty swivel mount: https://www.reconyx.com/product/Heavy-Duty-Swivel-Mount. We have heard good things about it and will update when we have more experience.
When using lithium AA batteries in a standalone camera, we have long camera run times. We are just getting acquainted with the cellular camera, which obviously requires more power. We have heard good things about Reconyx’s external power supply. It has a nice form factor, but it is pretty simple: a solar panel, charge controller, and replaceable battery. If you’re already great at setting up solar power and/or have the supplies in your lab, maybe you can save a little by doing it yourself. We’ll update here after we have some experience with this power supply: https://www.reconyx.com/product/solar-charger-10-watt.
Things we think you should NOT do:
Pretty please, do not just stick a t-post in the ground and attach a camera. You will get a lot of camera movement and it will make life more difficult when you want to process your images.
Do not just strap a camera on a tree. If you are using a tree and can’t use screws or lag bolts, then securely attach an enclosure (directly, or via swivel mount that is strapped to the tree). If you just strap the camera to a tree and then have to remove the strap and camera each time you swap an SD card and/or batteries, you will get a lot of camera movement and it will make life more difficult when you want to process your images.
In conclusion, we think the Reconyx camera is a good choice for our research projects. It is a relatively expensive option and much cheaper cameras might acquire imagery that is suitable for your work. We’d be happy to hear if there are other options that have worked well for you. When it’s all said and done, the best advice we can offer is to create a stable mounting system that minimizes changes in the field of view. Otherwise, you will get a lot of camera movement and it will make life more difficult when you want to process your images!