Property:CSDMS meeting abstract presentation
From CSDMS
This is a property of type Text.
In this clinic, we will provide a brief introduction to a selection of models (USGS and others), including FaSTMECH (2D/3D hydraulic) and PRMS (watershed hydrology), that have implemented a Basic Model Interface (BMI) and are available in the Python Modeling Toolkit (PyMT). We will interactively explore Jupyter Notebook examples of both stand-alone model operation and, as time permits, loosely coupled integrated modeling applications.
Participants will need a laptop with a web browser. Knowledge of Python, Jupyter Notebook, and hydrologic/hydraulic modeling is helpful, but not required. +
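The BMI calling pattern that PyMT components follow can be sketched with a toy model. The class below is illustrative only: the real interface is defined by the bmipy specification, and the model, field names, and config keys here are invented for the example:

```python
# Toy illustration of the BMI control pattern (initialize/update/finalize
# plus getters). NOT the real bmipy base class; the "linear reservoir"
# model and its config keys are invented for this sketch.
class LinearReservoir:
    """Toy watershed storage model exposing a BMI-like interface."""

    def initialize(self, config):
        self.k = config["recession_coefficient"]
        self.storage = config["initial_storage"]
        self.time = 0.0

    def update(self):
        self.storage -= self.k * self.storage  # outflow drains storage
        self.time += 1.0

    def get_value(self, name):
        return {"storage": self.storage, "time": self.time}[name]

    def finalize(self):
        self.storage = None

# A framework only needs the interface, not the model internals.
model = LinearReservoir()
model.initialize({"recession_coefficient": 0.1, "initial_storage": 100.0})
for _ in range(3):
    model.update()
print(model.get_value("storage"))  # ~72.9, i.e. 100 * 0.9**3
model.finalize()
```

Because every component exposes the same few calls, a framework like PyMT can drive, couple, and exchange data between models without knowing anything about their internals.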
In this clinic, we will talk about diversity in a way that makes it approachable and actionable. We advocate that actions in support of diversity can happen at all career levels, so everyone who is interested can partake.
We will discuss concrete strategies and opportunities to help you bring a diverse research group together. One way to create a diverse group is to reach out to undergraduate minority students and engage them in undergraduate research experiences. This can happen from the ground up: graduate students in a mentoring role can do this as productively as faculty in a hiring role. We are all supervisors and mentors in our own ways.
We will highlight a number of approaches to engage with underrepresented minority students when recruiting new graduate students, and suggest some concrete adjustments of your recruitment processes to be as inclusive as possible.
But being proactive does not stop after recruitment. The clinic will have dedicated discussion time to engage in role play, and provide stories about situations in which you can be an ally. We will identify some pitfalls and ways to recover from them, and provide ideas for more inclusive meetings and mentoring.
Lastly, together we can work on creating an overview of current programs that focus on diversity and inclusion, to apply for funding to take action. +
In this clinic, we will use flow routing in models of earth surface processes such as river incision. Landlab has several flow-routing components that address multiple-flow-direction routing, depression filling, and the diversity of grid types. We'll see how to design a landscape evolution model with relatively rapid flow-routing execution times on large grids. +
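As a sketch of what a flow-routing component does under the hood, here is a minimal pure-NumPy D8 (steepest-descent) flow accumulator on a tiny synthetic grid. It is illustrative only; Landlab's flow-routing components handle boundary conditions, depressions, and multiple flow directions far more robustly and efficiently:

```python
# Minimal D8 flow accumulation sketch (illustrative only; Landlab's
# components are the real tools for this).
import numpy as np

def d8_flow_accumulation(z):
    """Accumulate unit drainage area by steepest-descent (D8) routing."""
    nrows, ncols = z.shape
    area = np.ones_like(z, dtype=float)      # each cell contributes one unit
    # Visit cells from highest to lowest so donors drain before receivers.
    for idx in np.argsort(z, axis=None)[::-1]:
        r, c = divmod(idx, ncols)
        best_slope, receiver = 0.0, None
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < nrows and 0 <= cc < ncols:
                    slope = (z[r, c] - z[rr, cc]) / np.hypot(dr, dc)
                    if slope > best_slope:
                        best_slope, receiver = slope, (rr, cc)
        if receiver is not None:             # no downslope neighbor -> sink
            area[receiver] += area[r, c]     # pass accumulated area downslope
    return area

# An inclined plane drains everything straight down to the lowest row.
z = np.add.outer(np.arange(4.0), np.zeros(4))
print(d8_flow_accumulation(z)[0].sum())      # lowest row collects all 16 units
```

The high-to-low visiting order is the key trick: it guarantees each cell's upstream area is complete before it is passed on, which is also why efficient ordering matters so much for execution time on large grids.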
In this hands-on clinic, participants will simulate landslide runout and sediment transport patterns using the model MassWastingRunout (MWR). MWR is coded in Python and implemented as a component for the Landlab package. MWR uses runout algorithms typically found in landscape evolution and watershed sediment yield models to replicate the complex depositional and erosional behavior of actual landslides. Additional details on MWR can be found here: https://esurf.copernicus.org/articles/12/1165/2024/.
By the end of the clinic, it is hoped that participants will have an understanding of how to set up and calibrate MWR for a field site. This clinic consists of a series of brief presentations followed by hands-on Jupyter notebook tutorials. It is divided into three sections: (1) MWR model conceptualization, behavior, and limitations, illustrated on virtual terrains; (2) MWR performance at an actual field site; and (3) how to use MWR’s calibration utility to parameterize MWR to site-specific landslide runout behavior.
At the end of the second and third sections, we will assist those who wish to set up MWR for their own field site or one of the other example field sites. Model inputs for the example field sites will be provided but can also be found at the link below. MWR can be run at most sites by preparing model inputs following the format of the example inputs.
Example model inputs: https://www.hydroshare.org/resource/55813a5e01764546b76641a7385c2236/ +
In this presentation several modeling efforts in Chesapeake Bay will be reviewed that highlight how we can use 3-dimensional, time-dependent hydrodynamic models to provide insight into biogeochemical and ecological processes in marine systems. Two modeling studies will be discussed which illustrate the application of individual-based modeling approaches to simulate the impact of 3-dimensional currents and mixing on pelagic organisms and how these interact with behavior to determine the fate of planktonic species. This approach has many applications, related to the transport and fate of fish and invertebrate (e.g., oyster) larvae as well as plankton, that can be used to inform management efforts.<br><br>A long-term operational modeling project will be discussed that combines mechanistic and empirical modeling approaches to provide nowcasts and short-term forecasts of sea nettles, HABs, pathogens, and physical and biogeochemical properties for research, management, and public uses in Chesapeake Bay. This is a powerful technique that can be expanded to any marine system that has a hydrodynamic model and any marine organism for which the habitat can be defined.<br><br>Finally, a new research project will be reviewed in which we are assessing the readiness of a suite of existing estuarine community models for determining past, present, and future hypoxia events within the Chesapeake Bay, in order to accelerate the transition of hypoxia model formulations and products from academic research to operational centers. This work, which will ultimately provide the ability to do operational oxygen modeling in Chesapeake Bay (e.g., oxygen weather forecasts), can be extended to other coastal water bodies and any biogeochemical property. +
In this presentation, James Byrne (Lead Research Software Engineer) and
Jonathan Smith (Principal Research Scientist) from the British Antarctic
Survey will be describing existing digital infrastructure projects and
developments happening in and around BAS. They will give a flavour of
how technology is influencing the development of environmental and polar
science, covering numerous research and operational domains. They will
be focusing on the digital infrastructure applied to IceNet, an AI-based
deep learning infrastructure. We will then show how generalized
approaches to digital infrastructure are being applied to other areas,
including cutting-edge Autonomous Marine Operations Planning (AMOP) capabilities.
We will end by highlighting the challenges that need solving on the way towards an Antarctic Digital Twin and how we might approach them. +
In this talk, I will discuss the need for low-carbon and sustainable computing. Current emissions from computing are almost 4% of the world total. This is already more than emissions from the airline industry, and ICT emissions are projected to rise steeply over the next two decades. By 2040, emissions from computing alone will account for more than half of the emissions budget compatible with keeping global warming below 1.5°C. Consequently, this growth in computing emissions is unsustainable. The emissions from producing computing devices exceed the emissions from operating them, so even if devices become more energy efficient, producing more of them will make the emissions problem worse. Therefore, we must extend the useful life of our computing devices. As a society we need to start treating computational resources as finite and precious, to be utilized only when necessary, and as effectively as possible. We need frugal computing: achieving our aims with less energy and material.
'''Additional links:'''<br>
* Blog posts:
** On climate cost of AI
*** https://wimvanderbauwhede.codeberg.page/articles/the-insatiable-hunger-of-openai/
*** https://wimvanderbauwhede.codeberg.page/articles/google-search-vs-chatgpt-emissions/
*** https://wimvanderbauwhede.codeberg.page/articles/climate-cost-of-ai-revolution/
** On Frugal Computing
*** https://wimvanderbauwhede.codeberg.page/articles/frugal-computing/
*** https://wimvanderbauwhede.codeberg.page/articles/frugal-computing-consumer/
*** https://wimvanderbauwhede.codeberg.page/articles/frugal-computing-developer/
* University web site with slides and videos of seminar talks
** https://www.gla.ac.uk/schools/computing/research/researchthemes/lowcarbon/
* Low Carbon Computing Learning and Teaching Resources
** https://codeberg.org/jgrizou/Low-Carbon-Computing-Teaching-Resources +
In this webinar, I will present a new framework termed “Bayesian Evidential Learning” (BEL) that streamlines the integration of four components common to building Earth system models: data, model, prediction, and decision. This idea is published in a new book, “Quantifying Uncertainty in Subsurface Systems” (Wiley-Blackwell, 2018), and applied to five real case studies in oil/gas, groundwater, contaminant remediation, and geothermal energy. BEL is not a method but a protocol, based on Bayesianism, that leads to the selection of relevant methods to solve complex modeling and decision problems. In that sense, BEL focuses on purpose-driven data collection and model building. One of the important contributions of BEL is that it is a data-scientific approach that circumvents complex inversion modeling and instead relies on machine learning from Monte Carlo simulations with falsified priors. The case studies illustrate how modeling time can be reduced from months to days, making the approach practical for large-scale implementations. In this talk, I will provide an overview of BEL and how it relies on global sensitivity analysis, Monte Carlo methods, model falsification, prior elicitation, and data-scientific methods to implement the stated principles of its Bayesian philosophy. I will cover an extensive case study involving the management of a groundwater system in Denmark. +
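The core move in BEL, learning a direct statistical relation between simulated data and prediction variables from Monte Carlo samples of the prior rather than inverting for model parameters, can be caricatured in a few lines. Everything below is a made-up toy (invented forward models, parameter, and observation), not the book's code or case studies:

```python
# Toy sketch of the evidential-learning idea: Monte Carlo over the prior
# yields paired (simulated data, prediction) samples; a regression learned
# on those pairs maps an observation straight to a prediction, with no
# explicit inversion. All quantities here are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical subsurface parameter drawn from a prior.
theta = rng.normal(1.0, 0.5, size=5000)

# Forward models for an observable d and a prediction target h.
d = 2.0 * theta + rng.normal(0.0, 0.1, size=theta.size)  # "data" variable
h = theta**2                                             # "prediction" variable

# Learn d -> h by least squares on the Monte Carlo pairs.
coef = np.polyfit(d, h, deg=2)

d_obs = 2.4                       # a hypothetical field observation
h_pred = np.polyval(coef, d_obs)  # prediction without inverting for theta
print(h_pred)                     # near 1.44, i.e. (d_obs / 2)**2
```

In the real protocol the regression step is preceded by prior falsification (checking the observed data is consistent with the prior's simulated data) and by global sensitivity analysis to decide which data are worth collecting.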
In this webinar, we will demonstrate how to make a contribution to a community open-source repository. Using a live demo, we will walk through the process, starting from making edits to your local copy of the source code, through to submitting them as a “pull request” and going through the review process. We will illustrate how to walk through the various steps: posting an issue, making a local code branch (and/or fork), running unit tests, pushing changes to the remote repository, creating a pull request, understanding results of Continuous Integration tests, and managing a code review.
'''Checklist from issue to pull request:'''
<hr>
'''Make an issue (if there isn't already one)'''<br>
It's good practice to start any change with an Issue on the Landlab GitHub repository's Issues tab.
* Sign on to GitHub
* Navigate to the Landlab repository
* Select the Issues tab, then the New Issue button
In the first example, I'm going to address Issue #2258 "Correct parameterization of DepthDependentLinearDiffuser"
'''Make a fork of the Landlab repository'''<br>
* Sign on to GitHub
* Navigate to https://github.com/landlab/landlab
* Click the '''Fork''' button to create a fork of the repository in your own GitHub space
'''Clone your fork to your local machine'''<br>
* Navigate to your forked version of the repository. For example, my GitHub username is gregtucker, so I navigate to https://github.com/gregtucker/landlab. This is my fork.
* Use the '''Code => Clone''' button to copy the clone address
* In a Terminal (i.e., UNIX shell) on your local machine, navigate to the folder that will contain the local version of your Landlab fork. Example for me:
cd ~/Documents/Talks/2025/Webinar-contributing-to-csdms
* Type git clone and then paste in the address you copied from GitHub. Example for me:
git clone git@github.com:gregtucker/landlab.git
'''Create a branch for your edits or additions'''<br>
Example: git checkout -b gt/update-ddepdiff-init
'''Make edits / additions on your local copy'''<br>
'''Run linters'''<br>
If you have changed or added code files and/or Jupyter notebooks, run the linting tools:
* black
* flake8
and the equivalents for Jupyter notebooks if relevant.
'''Make a "news frag" file'''<br>
To keep track of additions and changes, Landlab uses "news fragments": tiny files that each contain a record of one change. To add a news fragment for your additions/changes, navigate to the landlab/news folder. Create a file whose name is the Issue number that you are addressing, plus a file extension that is one of the following:
* bugfix for a bug fix
* doc for a documentation improvement
* feature for a new feature (such as a new component)
* removal for removal or deprecation of a public-facing feature
* misc for everything else
In this example, it's somewhere between a bug fix (it's not technically a bug, but could be misleading), a documentation improvement (it's no longer misleading!), and a removal (because we're getting rid of the old name of a parameter). Since it's not totally clear, I'll just call this misc. My example is responding to Issue #2258, so my news frag file will be:
2258.misc
Inside the file I'll put a one-line description, something like:
Correct the header doc and soil-velocity parameter name in DepthDependentLinearDiffuser.
'''Run the tests locally'''<br>
(See: https://landlab.csdms.io/install/developer_install.html#install)
From the main Landlab folder:
* Lint: nox -s lint (runs the linters to make sure code is clean and consistent)
* News frag: nox -s towncrier (checks to make sure there's a valid news frag)
* Code: nox -s test (this one takes a few minutes)
* Jupyter notebooks: nox -s test-notebooks (also takes a bit of time)
* Docs: nox -s build-docs (if you've changed any documentation, including inline)
Once all your tests are passing locally, you are ready to push to your remote fork.
'''Push to your remote fork'''<br>
Once you've made and committed your changes with git, and the tests are all passing locally, push your changes to your remote fork.
Example:
git push origin gt/update-ddepdiff-init
'''Activate a Pull Request'''<br>
Returning to your browser, check your fork's GitHub page. You should see a yellow banner with a message that reads something like "gt/update-ddepdiff-init had recent pushes 3 seconds ago", and a green button labeled "Compare & pull request".
Press this button to activate a pull request for your changes.
This takes you to the main Landlab repository page on GitHub. Here you can:
* Give your Pull Request a title
* Describe your Pull Request
* Change it to a DRAFT Pull Request (using the arrow on the green Create Pull Request button)
Once you've done those steps, click the green '''Draft pull request''' button. This will activate a series of automated tests.
'''What Makes a Good Pull Request'''<br>
* '''Small and focused''': Address one issue or feature per pull request. Avoid bundling multiple unrelated changes.
* '''Clear purpose''': The pull request should explain why a change is needed, not just what was changed.
* '''Draft first''': Open your pull request in '''draft mode''' so others can follow progress and provide early feedback.
* '''Readable history''': Prefer a clean commit history. Consider squashing or rebasing before final review.
* '''Well-tested and documented''': All code changes should be accompanied by appropriate tests and updated documentation.
'''Requesting a review'''<br>
Once your tests have all passed, it's time to get your Pull Request reviewed. In theory, a CSDMS staff member should be alerted to the newly submitted Pull Request, but best practice is to be proactive: request a review!
A good way to do this is to add a Comment (see the bottom of your Pull Request page) in which you request a review.
If you're not sure who should review it, just write something like: "Could someone at CSDMS review this please?"
If you know of particular people that you would love to get a review from, go ahead and tag them by their GitHub username. For example, I might ask for reviews from CSDMS team members Eric, Mark, and/or Tian, so my comment could be:
"Could @mcflugen, @mdpiper, and/or @gantian127 review this for me please?"
You can also request reviews from other community members - we highly encourage this!
'''How many reviews?'''<br>
Your code should be reviewed by at least one person on the development team for the repository in question. Ideally, it's nice to have two or even three, but we don't require this.
'''The review dance'''<br>
Just like a journal review, there's often some back-and-forth, with suggestions for (often minor) improvements. Fortunately, "Reviewer 2" won't show up - these are friendly reviews!
'''The final step'''<br>
Once you and the reviewer(s) are happy with your PR, a member of the development team (usually also a reviewer) will click the magic button that merges your PR into the main development version.
Time to celebrate: your contribution is now official!
In this workshop we will explore publicly available socioeconomic and hydrologic datasets that can be used to inform riverine flood risks under present-day and future climate conditions. We will begin with a summary of different stakeholders’ requirements for understanding flood risk data, through the lens of our experience working with federal, state and local clients and stakeholders. We will then guide participants through the relevant data sources that we use to inform these studies, including FEMA floodplain maps, census data, building inventories, damage functions, and future projections of extreme hydrologic events. We will gather and synthesize some of these data sources, discuss how each can be used in impact analyses, and discuss the limitations of each available data source. We will conclude with a brainstorming session to discuss how the scientific community can better produce actionable information for community planners, floodplain managers, and other stakeholders who might face increasing riverine flood risks in the future. +
Increased computing power, high resolution imagery, new geologic dating techniques, and a more sophisticated comprehension of the geodynamic and geomorphic processes that shape our planet place us on the precipice of major breakthroughs in understanding links between tectonics and surface processes. In this talk, I will use the University of Washington’s “M9 project” to highlight research progress and challenges in coupled tectonics and surface processes studies over both short (earthquake) and long (mountain range) timescales. A Cascadia earthquake of magnitude 9 (M9) would cause shaking, liquefaction, landslides and tsunamis from British Columbia to northern California. The M9 project explores this risk, resilience and the mechanics of Cascadia subduction. At the heart of the project are synthetic ground motions generated from 3D finite difference simulations for 50 earthquake scenarios including factors not previously considered, such as the distribution and timing of energy release on the fault, the coherent variation of frequency content of fault motion with fault depth, and the 3D effects of the deep basins along Puget Sound. Coseismic landslides, likely to number in the thousands, represent one of the greatest risks to the millions of people living in Cascadia. Utilizing the synthetic ground motions and a Newmark sliding block analysis, we compute the landscape response for different landslide failure modes. Because an M9 subduction earthquake is well known to have occurred just over 300 years ago, evidence of coseismic landslides triggered by this event should still be present in Washington and Oregon landscapes. We are systematically hunting for these landslides using a combination of radiocarbon dating and surface roughness analysis, a method first developed to study landslides near the site of the 2014 Oso disaster, to develop more robust regional landslide chronologies to compare to model estimations.
Resolved ground motions and hillslope response for a single earthquake can then be integrated into coupled landscape evolution and geodynamic models to consider the topographic and surface processes response to subduction over millions of years. This example demonstrates the power of an integrative, multidisciplinary approach to provide deeper insight into coupled tectonic and surface processes phenomena over a range of timescales.
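For readers unfamiliar with it, the Newmark sliding-block method mentioned above integrates ground acceleration in excess of a critical (yield) acceleration twice to estimate permanent downslope displacement. Here is a minimal sketch on a synthetic acceleration pulse; it is illustrative only, not the M9 project's implementation, and the numbers are arbitrary:

```python
# Minimal Newmark rigid sliding-block sketch (standard method; values and
# forcing are arbitrary, not from the M9 simulations).
import numpy as np

def newmark_displacement(acc, dt, a_crit):
    """Permanent displacement for acceleration series acc (m/s^2)."""
    vel = 0.0
    disp = 0.0
    for a in acc:
        if a > a_crit or vel > 0.0:      # block slides while driven or moving
            vel += (a - a_crit) * dt     # excess acceleration -> velocity
            vel = max(vel, 0.0)          # block cannot slide upslope
            disp += vel * dt             # velocity -> displacement
    return disp

# Synthetic forcing: two 1 Hz cycles whose peaks exceed the yield acceleration.
t = np.arange(0.0, 2.0, 0.01)
acc = 3.0 * np.sin(2 * np.pi * t)
print(newmark_displacement(acc, dt=0.01, a_crit=1.5))  # small positive slip, in m
```

Run over a suite of synthetic ground motions and a map of critical accelerations (set by slope and strength), the same double integration yields a regional picture of expected coseismic landslide displacement.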
Increasing physical complexity, spatial resolution, and technical coupling of numerical models for various earth systems require increasing computational resources, efficient code bases and tools for analysis, and community codevelopment. In these arenas, climate technology industries have leapfrogged academic and government science, particularly with regards to adoption of open community code and collaborative development and maintenance. In this talk, I will discuss industry coding practices I learned to bring into my workflow for efficient and rapid development, easier maintenance, collaboration and learning, and reproducibility. +
Individual-based vegetation models are essential for understanding and predicting ecosystem responses to environmental change. While these models rely on well-established process descriptions - such as vegetation establishment, growth and mortality - they are often developed from scratch, leading to inefficiencies. We present pyMANGA, an open-source, modular platform designed to streamline model development and enable systematic hypothesis testing. By allowing researchers to combine, modify and extend different concepts of plant growth, competition and resource dynamics, pyMANGA supports flexible, reproducible modelling. The platform is particularly suited to the study of ecohydrological interactions, including soil-plant feedback loops in coastal ecosystems. Transparency is ensured through open-source access, version control and automated benchmarking, while a structured review process fosters collaboration. Defined interfaces make it easy to compare models of varying complexity and abstraction, improving reproducibility and robustness. By providing an efficient and extensible framework, pyMANGA advances ecological modelling and improves decision making in environmental science. +
Interested in which variables influence your model outcome? SALib (Sensitivity Analysis Library) provides commonly used sensitivity analysis methods implemented as a Python package. In this clinic we will use these methods with example models to apportion uncertainty in model output to model variables. We will use models built with the Landlab Earth-surface dynamics framework, but the analyses can be easily adapted for other model software. No prior experience with Landlab or Python is necessary. +
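As a preview of the kind of method SALib provides, here is a hand-rolled sketch of the first-order Sobol index estimator (the pick-freeze scheme of Saltelli et al., 2010) on a toy function. SALib's sample and analyze modules implement this and related methods far more completely, with proper quasi-random sampling and confidence intervals:

```python
# Hand-rolled first-order Sobol sensitivity indices (pick-freeze scheme).
# Illustrative sketch of what SALib automates; the toy model is invented.
import numpy as np

def first_order_sobol(model, n_vars, n_samples, rng):
    """Estimate first-order Sobol indices S1 for each input variable."""
    A = rng.random((n_samples, n_vars))
    B = rng.random((n_samples, n_vars))
    y_a = model(A)
    y_b = model(B)
    var_y = np.var(np.concatenate([y_a, y_b]))
    s1 = np.empty(n_vars)
    for i in range(n_vars):
        AB = A.copy()
        AB[:, i] = B[:, i]           # swap in column i only ("pick-freeze")
        y_ab = model(AB)
        s1[i] = np.mean(y_b * (y_ab - y_a)) / var_y
    return s1

# Toy model: output variance splits 1:4 between the two inputs.
f = lambda X: X[:, 0] + 2.0 * X[:, 1]
s1 = first_order_sobol(f, n_vars=2, n_samples=4096,
                       rng=np.random.default_rng(0))
print(s1)  # close to [0.2, 0.8]
```

Each S1 value is the fraction of output variance attributable to one input acting alone, which is exactly the apportionment of uncertainty the clinic is about; in practice `model` would be a wrapper that runs a Landlab simulation per sample row.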
Introduction for the CSDMS 2020 annual meeting, presenting last year's accomplishments and available resources for the community. +
Introduction for the CSDMS 2021 annual meeting +
Introduction to the Natural Hazard workshop +
It is now well established that the evolution of terrestrial species is highly impacted by long-term topographic changes (e.g., high biodiversity in mountain ranges globally). Recent advances in landscape and biological models have opened the gate for deep investigation of the feedbacks between topographic changes and biological processes (e.g., dispersal, adaptation, speciation) over timescales of millions of years. In this clinic, we will use novel codes that couple biological processes with FastScape, a widely used landscape evolution model, and explore biological processes and speciation during and after mountain building under different magnitudes of tectonic rock uplift rates. We will explore and deduce how the magnitude and pace of mountain building impact biodiversity and how such interactions can be tracked in mountain ranges today. Python and Jupyter Notebook will be used in the clinic, and basic knowledge of Python is desirable. +
It is well established that coupling and strong feedbacks may occur between solid Earth deformation and surface processes across a wide range of spatial and temporal scales. As both systems on their own encapsulate highly complex and nonlinear processes, fully-coupled simulations require advanced numerical techniques and a flexible platform to explore a multitude of scenarios. Here, we will demonstrate how the Advanced Solver for Problems in Earth's Convection and Tectonics (ASPECT) can be coupled with FastScape to examine feedbacks between lithospheric deformation and landscape evolution. The clinic will cover the fundamental equations being solved, how to design coupled simulations in ASPECT, and examples of coupled continental extension and landscape evolution. +
It is well established that coupling and strong feedbacks may occur between solid Earth deformation and surface processes across a wide range of spatial and temporal scales. As both systems on their own encapsulate highly complex and nonlinear processes, fully-coupled simulations require advanced numerical techniques and a flexible platform to explore a multitude of scenarios. Here, we will demonstrate how the Advanced Solver for Planetary Evolution, Convection, and Tectonics (ASPECT) can be coupled with FastScape to examine feedbacks between lithospheric deformation and landscape evolution. The clinic will cover the fundamental equations being solved, how to design coupled simulations in ASPECT, and examples of coupled continental extension and landscape evolution. Participants will be able to participate in live exercises through new online computing resources hosted by the Computational Infrastructure for Geodynamics (CIG). +
