BBC Article on Ice Sheet Sensors Link December 2009
dggridR
Spatial analyses involving binning require that every bin have the same area, but this is impossible using a rectangular grid laid over the Earth or over any projection of the Earth. Discrete global grids use hexagons, triangles, and diamonds to overcome this issue, overlaying the Earth with equally-sized bins. This package provides utilities for working with discrete global grids, along with utilities to aid in plotting such data.
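The problem is easy to quantify: on a spherical Earth the area of a latitude-longitude cell is proportional to the difference of the sines of its bounding latitudes, so equal-angle cells shrink towards the poles. A quick sketch of this (plain Python for illustration; dggridR itself is an R package):

```python
import math

R = 6371.0  # mean Earth radius in km

def latlon_cell_area(lat_deg, dlat=1.0, dlon=1.0):
    """Area (km^2) of a dlat x dlon rectangular cell whose southern
    edge sits at lat_deg, assuming a spherical Earth."""
    lat1 = math.radians(lat_deg)
    lat2 = math.radians(lat_deg + dlat)
    return R * R * math.radians(dlon) * (math.sin(lat2) - math.sin(lat1))

# A 1°x1° cell covers roughly 12,400 km^2 at the equator but only about
# 2,000 km^2 at 80° latitude: rectangular "equal" bins are nothing of the sort.
for lat in (0, 45, 80):
    print(f"1°x1° cell at {lat:2}°: {latlon_cell_area(lat):8.1f} km^2")
```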
Webglobe
You want to understand your data, but it's spatially distributed and you're afraid that trying to make sense of it on something gross, like a Mercator projection, is going to lead you to bad intuitions. Webglobe can help you fix this! It allows you to interactively visualize your data on either a three-dimensional globe or a flat map.
RichDEM
RichDEM is a set of hydrologic analysis tools for use on digital elevation models (DEMs). RichDEM uses parallel processing and state-of-the-art algorithms to quickly process even very large DEMs.
RichDEM can use both the D8 and D∞ (Tarboton) flow metrics. It can resolve terrain depressions (or pits) either by filling or by channel carving. It can calculate contributing/up-slope areas, slopes, curvatures, and aspects.
The present version is built in C++ for speed and uses OpenMP to achieve parallelism; future versions may use Intel's Threading Building Blocks to achieve additional increases in speed.
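For a flavour of the simplest of these computations, here is a minimal serial sketch of the D8 flow metric (Python for brevity; RichDEM's actual implementation is parallel C++ and also handles grid edges, NoData cells, and ties):

```python
import numpy as np

# The eight D8 neighbours (N, NE, E, SE, S, SW, W, NW) and their distances.
NEIGHBOURS = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]
DISTANCES = [1, 2**0.5, 1, 2**0.5, 1, 2**0.5, 1, 2**0.5]

def d8_flow_directions(dem):
    """For each interior cell, the index (0-7) of its steepest downslope
    neighbour, or -1 if the cell has no lower neighbour (a pit or a flat)."""
    rows, cols = dem.shape
    flowdirs = np.full((rows, cols), -1, dtype=int)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            best_slope, best_n = 0.0, -1
            for n, ((dr, dc), dist) in enumerate(zip(NEIGHBOURS, DISTANCES)):
                slope = (dem[r, c] - dem[r + dr, c + dc]) / dist
                if slope > best_slope:
                    best_slope, best_n = slope, n
            flowdirs[r, c] = best_n
    return flowdirs

dem = np.array([[3, 3, 3], [3, 2, 3], [3, 1, 3]], dtype=float)
print(d8_flow_directions(dem))  # the centre cell drains south (index 4), towards the 1
```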
RichDEM has resulted in the following publications:
Parallel Non-divergent Flow Accumulation For Trillion Cell Digital Elevation Models On Desktops Or Clusters (Link)
Parallel Priority-Flood Depression Filling For Trillion Cell Digital Elevation Models On Desktops Or Clusters (Link)
An Efficient Assignment of Drainage Direction Over Flat Surfaces in Raster Digital Elevation Models (Link)
Priority-Flood: An Optimal Depression-Filling and Watershed-Labeling Algorithm for Digital Elevation Models (Link)
Distributed Parallel D8 Up-Slope Area Calculation in Digital Elevation Models (Link)
Grid Engine
grid_engine is a C++ class for working flexibly with different kinds of two-dimensional grids.
It can handle hexagonal, 4-connected, and 8-connected grids.
Any of these grids can be used as a toroid, such that the edges of the grid wrap around.
The same grid may be treated as a toroid, non-toroid, 4-, 8-, or hex-connected without needing to perform any modifications to the data structure. Toroidness and connectivity are not treated as fundamental aspects of a grid, but, rather, as artefacts of the way a grid is traversed.
Adjacency neighbourhoods and visuals are generated with Python.
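The core idea can be sketched in a few lines (Python here for clarity; grid_engine itself is C++, and the hexagonal case is omitted): the neighbourhood generator, not the stored array, decides both connectivity and toroidality.

```python
def neighbours(x, y, width, height, connectivity=8, toroidal=False):
    """Yield the neighbours of (x, y); wrapping and connectivity are
    properties of the traversal, not of the underlying grid data."""
    offsets = {4: [(1, 0), (-1, 0), (0, 1), (0, -1)],
               8: [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                   if (dx, dy) != (0, 0)]}[connectivity]
    for dx, dy in offsets:
        nx, ny = x + dx, y + dy
        if toroidal:
            yield nx % width, ny % height            # wrap around the edges
        elif 0 <= nx < width and 0 <= ny < height:
            yield nx, ny                             # clip at the edges

# The same corner cell of the same grid, traversed two different ways:
print(list(neighbours(0, 0, 5, 5, connectivity=4)))                 # [(1, 0), (0, 1)]
print(list(neighbours(0, 0, 5, 5, connectivity=4, toroidal=True)))  # wraps to x=4 and y=4
```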
Climate Tracker
Climate Tracker is a simple model for visualizing historic climate data from the United States and Canada as tracks running across the landscape and through time.
Climate Tracker uses OpenLayers to display climate maps, aggregates data with C++, and uses Python and NumPy to fit the models. PHP routes requests from the client to the server.
The tracks Climate Tracker produces can be used to calculate climate velocities, providing a simple means of understanding climate, its changes, and its interactions with the landscape.
Grid Drafter
This Python program is used to draw and manage two-dimensional integer elevation grids.
It can handle both square and hexagonal grids.
When the program is first run, it will display a blank screen with a grid on it.
Pressing keys 0-8 will then allow you to fill the grid cells in with a colour which corresponds to that key.
Pressing “S” will save the grid file in a format which can be read back in by the program for further editing. The colours drawn previously will be saved as integer numbers.
I use this program for creating and editing small digital elevation models for testing with hydrologic algorithms.
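As an illustration of the save/load round trip (assuming a plain whitespace-separated integer layout; Grid Drafter's actual file format may differ):

```python
import numpy as np

# Hypothetical round trip of an integer elevation grid through a text file.
grid = np.array([[0, 1, 2],
                 [3, 4, 5],
                 [8, 7, 6]], dtype=int)
np.savetxt("grid.txt", grid, fmt="%d")                    # "S" saves something like this
assert (np.loadtxt("grid.txt", dtype=int) == grid).all()  # and it reads back for editing
```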
SDR33 Triangulator
The Sokkia SDR33 data logger is used in surveying to record data from, and to control, certain total stations. Situations arise where a point to be surveyed cannot be directly accessed for surveying in the field; however, the location of such a point can be triangulated by moving the total station to two different locations and measuring the angles from each of these locations to the unknown point.
Since the SDR33 data logger does not have a triangulation option built in, an external program must be used to post-process the data. This program performs this processing. The program reads an unreduced SDR33 data file produced with Sokkia's ProLink software and returns the triangulated position of one or more points.
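The geometry underneath is a standard two-ray intersection. A sketch, assuming plane coordinates and bearings measured clockwise from grid north (the real program instead parses angles out of the unreduced SDR33 file):

```python
import math

def triangulate(station1, bearing1, station2, bearing2):
    """Intersect two rays, each given by a station (x, y) and a bearing in
    degrees clockwise from grid north, returning the unknown point."""
    x1, y1 = station1
    x2, y2 = station2
    # A bearing b corresponds to the direction vector (sin b, cos b) in (E, N).
    d1 = (math.sin(math.radians(bearing1)), math.cos(math.radians(bearing1)))
    d2 = (math.sin(math.radians(bearing2)), math.cos(math.radians(bearing2)))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        raise ValueError("sight lines are parallel: no unique intersection")
    t = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / denom
    return x1 + t * d1[0], y1 + t * d1[1]

# Two stations 100 m apart sight the same point at 45° and 315°:
print(triangulate((0, 0), 45.0, (100, 0), 315.0))  # -> approximately (50.0, 50.0)
```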
I have made relatively minor contributions to the following Open Source projects:
When wind blows over dry snow, it creates shapes known as snow bedforms. These features, which include snow dunes, waves, snow-steps and sastrugi, ornament Antarctica, Arctic sea ice, tundra, and mountain ridges. They change the reflectivity and average thermal conductivity of snow, and may change the patterns of snow accumulation and transport. Despite these effects, however, snow bedforms are poorly understood and not yet included in major snow or climate models. Here, we present a model which generates snow bedforms.
Spatial analyses involving binning often require that every bin have the same area, but this is impossible using a rectangular grid laid over the Earth or over any projection of the Earth. Discrete global grids use hexagons, triangles, and diamonds to overcome this issue, overlaying the Earth with equally-sized bins. Such discrete global grids are formed by tiling the faces of a polyhedron. Previously, the orientations of these polyhedra have been chosen to satisfy only simple criteria such as equatorial symmetry or minimizing the number of vertices intersecting landmasses. However, projection distortion and singularities in discrete global grids mean that such simple orientations may not be sufficient for all use cases. Here, I present an algorithm for finding suitable orientations; this involves solving a nonconvex optimization problem. As a side-effect of this study I show that Fuller’s Dymaxion map corresponds closely to one of the optimal orientations I find. I also give new high-accuracy calculations of the Poles of Inaccessibility, which show that Point Nemo, the Oceanic Pole of Inaccessibility, is 15 km farther from land than previously recognized.
Solving inverse problems, performing sensitivity analyses, and achieving statistical rigour in landscape evolution models requires running many model realizations. Parallel computation is necessary to achieve this in a reasonable time. However, no previous landscape evolution algorithm is able to leverage modern parallelism. Here, I describe an algorithm that can utilize the parallel potential of GPUs and many-core processors, in addition to working well in serial. The new algorithm runs 43x faster (70s vs. 3000s on a 10,000x10,000 input) than the previous state of the art and exhibits sublinear scaling with input size. I also identify key techniques for multiple flow direction routing and quickly eliminating landscape depressions and local minima. Complete, well-commented, easily adaptable source code for all versions of the algorithm is available on Github and Zenodo.
Continent-scale datasets challenge hydrological algorithms for processing digital elevation models. Flow accumulation is an important input for many such algorithms; here, I parallelize its calculation. The new algorithm works on one or many cores, or multiple machines, and can take advantage of large memories or cope with small ones. Unlike previous algorithms, the new algorithm guarantees a fixed number of memory access and communication events per raster cell. In testing, the new algorithm ran faster and used fewer resources than previous algorithms, exhibiting ~30% strong and weak scaling efficiencies up to 48 cores and linear scaling across datasets ranging over three orders of magnitude. The largest dataset tested has two trillion (2*10^12) cells. With 48 cores, processing required 24 minutes wall-time (14.5 compute-hours). This test is three orders of magnitude larger than any previously performed in the literature. Complete, well-commented source code and correctness tests are available for download from Github.
For many taxa and systems, species richness peaks at mid-elevations. One potential explanation for this pattern is that large-scale changes in climate and geography have, over evolutionary time, selected for traits that are favored under conditions found in contemporary mid-elevation regions. To test this hypothesis, we used records of historical temperature and topographic changes over the past 65 Myr to construct a general simulation model of Plethodontid salamander evolution in eastern North America. We then explore possible mechanisms constraining species to mid-elevation bands by using the model to predict Plethodontid evolutionary history and contemporary geographic distributions. Our results show that models which incorporate both temperature and topographic changes are better able to predict these patterns, suggesting that both processes may have played an important role in driving Plethodontid evolution in the region. Additionally, our model (whose annotated source code is included as a supplement) represents a proof of concept to encourage future work that takes advantage of recent advances in computing power to combine models of ecology, evolution, and earth history to better explain the abundance and distribution of species over time.
Algorithms for extracting hydrologic features and properties from digital elevation models (DEMs) are challenged by large datasets, which often cannot fit within a computer's RAM. Depression filling is an important preconditioning step to many of these algorithms. Here, I present a new, linearly-scaling algorithm which parallelizes the Priority-Flood depression-filling algorithm by subdividing a DEM into tiles. Using a single-producer, multi-consumer design, the new algorithm works equally well on one core, multiple cores, or multiple machines and can take advantage of large memories or cope with small ones. Unlike previous algorithms, the new algorithm guarantees a fixed number of memory access and communication events per subdivision of the DEM. In comparison testing, this results in the new algorithm running generally faster while using fewer resources than previous algorithms. For moderately sized tiles, the algorithm exhibits ~60% strong and weak scaling efficiencies up to 48 cores, and linear time scaling across datasets ranging over three orders of magnitude. The largest dataset on which I run the algorithm has 2 trillion (2*10^12) cells. With 48 cores, processing required 4.8 hours wall-time (9.3 compute-days). This test is three orders of magnitude larger than any previously performed in the literature. Complete, well-commented source code and correctness tests are available for download from a repository.
A Pipeline Strategy For Crop Domestication
L DeHaan, D VanTassel, J Anderson, S Asselin, R Barnes, G Baute, D Cattani, S Culman, K Dorn, B Hulke, M Kantar, S Larson, M Marks, A Miller, J Poland, D Ravetta, E Rude, M Ryan, D Wyse, X Zhang
Crop Science (Volume 56, May–Jun 2016, Pages 1–14)
PDF
doi: 10.2135/cropsci2015.06.0356
In the interest of diversifying the global food system, improving human nutrition, and making agriculture more sustainable, there have been many proposals to domesticate wild plants or complete the domestication of semi-domesticated “orphan” crops. However, very few new crops have recently been fully domesticated. Many wild plants have traits limiting their production or consumption that could be costly and slow to change. Others may have fortuitous pre-adaptations that make them easier to develop or feasible as high-value, albeit low-yielding, crops. To increase success in contemporary domestication of new crops, we propose a “pipeline” approach, with attrition expected as species advance through the pipeline. We list criteria for ranking domestication candidates to help enrich the starting pool with more pre-adapted, promising species. We also discuss strategies for prioritizing initial research efforts once the candidates have been selected: developing higher value products and services from the crop, increasing yield potential, and focusing on overcoming undesirable traits. Finally, we present new-crop case studies which demonstrate that wild species' limitations and potential (in agronomic culture, shattering, seed size, harvest, cleaning, hybridization, etc.) are often only revealed during the early phases of domestication. When nearly insurmountable barriers were reached in some species, they have been (at least temporarily) eliminated from the pipeline. Conversely, a few species have moved quickly through the pipeline as hurdles such as low seed weight or low seed number per head were rapidly overcome, leading to increased confidence, farmer collaboration, and program expansion.
Over the last half-century, crop breeding and agronomic advances have dramatically enhanced yields in temperate summer-annual cropping systems. Now, diversification of these cropping systems is emerging as a strategy for sustainable intensification, potentially increasing both crop production and resource conservation. In temperate zones, diversification is largely based on the introduction of winter-annual and perennial crops at spatial and temporal locations in annual-crop production systems that efficiently increase production and resource conservation. Germplasm development will be critical to this strategy, but we contend that to be feasible and efficient, germplasm improvement must be closely integrated with commercialization of these crops. To accomplish this integration, we propose a novel approach to germplasm development: the reflective plant breeding paradigm (RPBP). Our approach is enabled by developments in genomics, agroecosystem management, and innovation theory and practice. These developments and new plant-breeding technologies (e.g., low-cost sequencing, phenotyping, and spatial modeling of agroecosystems) now enable germplasm development to proceed on a time scale that enables close coordination of breeding and commercialization (i.e., development of cost-effective production systems and supply–value chains for end-use markets). The RPBP approach is based on close coordination of germplasm development with enterprise development. In addition to supporting strategic diversification of current annual-cropping systems, the RPBP may be useful in rapid adaptation of agriculture to climate change. Finally, the RPBP may offer a novel and distinctive pathway for future development of the public plant-breeding programs of land-grant universities with implications for graduate education for public- and private-sector plant breeders.
In processing raster digital elevation models (DEMs) it is often necessary to assign drainage directions over flats—that is, over regions with no local elevation gradient. This paper presents an approach to drainage direction assignment which is not restricted by a flat's shape, number of outlets, or surrounding topography. Flow is modeled by superimposing a gradient away from higher terrain with a gradient towards lower terrain resulting in a drainage field exhibiting flow convergence, an improvement over methods which produce regions of parallel flow. This approach builds on previous work by Garbrecht and Martz (1997), but presents several important improvements. The improved algorithm guarantees that flats are only resolved if they have outlets. The algorithm does not require iterative application; a single pass is sufficient to resolve all flats. The algorithm presents a clear strategy for identifying flats and their boundaries. The algorithm is not susceptible to loss of floating-point precision. Furthermore, the algorithm is efficient, operating in O(N) time whereas the older algorithm operates in O(N^(3/2)) time. In testing, the improved algorithm ran 6.5 times faster than the old for a 100x100 cell flat and 69 times faster for a 700x700 cell flat. In tests on actual DEMs, the improved algorithm finished its processing 38–110 times sooner while running on a single processor than a parallel implementation of the old algorithm did while running on 16 processors. The improved algorithm is an optimal, accurate, easy-to-implement drop-in replacement for the original. Pseudocode is provided in the paper and working source code is provided in the Supplemental Materials.
Depressions (or pits) are low areas within a digital elevation model that are surrounded by higher terrain, with no outlet to lower areas. Filling them so they are level, as fluid would fill them if the terrain were impermeable, is often necessary in preprocessing DEMs. The depression-filling algorithm presented here—called Priority-Flood—unifies and improves on the work of a number of previous authors who have published similar algorithms. The algorithm operates by flooding DEMs inwards from their edges using a priority queue to determine the next cell to be flooded. The resultant DEM has no depressions or digital dams: every cell is guaranteed to drain. The algorithm is optimal for both integer and floating-point data, working in O(n) and O(n log₂ n) time, respectively. It is shown that by using a plain queue to fill depressions once they have been found, an O(m log₂ m) time-complexity can be achieved, where m does not exceed the number of cells n. This is the lowest time complexity of any known floating-point depression-filling algorithm. In testing, this improved variation of the algorithm performed up to 37% faster than the original. Additionally, a parallel version of an older, but widely-used depression-filling algorithm required six parallel processors to achieve a run-time on par with what the newer algorithm's improved variation took on a single processor. The Priority-Flood Algorithm is simple to understand and implement: the included pseudocode is only 20 lines and the included C++ reference implementation is under a hundred lines. The algorithm can work on irregular meshes as well as 4-, 6-, 8-, and n-connected grids. It can also be adapted to label watersheds and determine flow directions through either incremental elevation changes or depression carving. In the case of incremental elevation changes, the algorithm includes safety checks not present in other algorithms.
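The abstract describes the method fully enough to sketch it; the paper's pseudocode and C++ reference implementation are the authoritative versions. A minimal Python rendering for an 8-connected grid:

```python
import heapq
import numpy as np

def priority_flood_fill(dem):
    """Fill depressions by flooding the DEM inwards from its edges,
    always expanding from the lowest cell seen so far."""
    dem = dem.astype(float).copy()
    rows, cols = dem.shape
    closed = np.zeros((rows, cols), dtype=bool)
    pq = []  # min-heap of (elevation, row, col)
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):  # seed with the edge cells
                closed[r, c] = True
                heapq.heappush(pq, (dem[r, c], r, c))
    while pq:
        elev, r, c = heapq.heappop(pq)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and not closed[nr, nc]:
                    closed[nr, nc] = True
                    dem[nr, nc] = max(dem[nr, nc], elev)  # raise pit cells to the spill level
                    heapq.heappush(pq, (dem[nr, nc], nr, nc))
    return dem

dem = np.array([[5, 5, 5, 5, 5],
                [5, 1, 1, 1, 5],
                [5, 1, 0, 1, 5],
                [5, 1, 1, 1, 5],
                [5, 5, 2, 5, 5]], dtype=float)
print(priority_flood_fill(dem))  # the inner basin rises to 2, the elevation of its outlet
```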
Bovine spongiform encephalopathy, otherwise known as mad cow disease, can spread when an individual cow consumes feed containing the infected tissues of another individual, forming a one-species feedback loop. Such feedback is the primary means of transmission for BSE during epidemic conditions. Following outbreaks in the European Union and elsewhere, many governments enacted legislation designed to limit the spread of such diseases via elimination or reduction of one-species feedback loops in agricultural systems. However, two-species feedback loops—those in which infectious material from one species is consumed by a second species whose tissue is then consumed by the first species—were not universally prohibited and have not been studied before. Here we present a basic ecological disease model which examines the rôle feedback loops may play in the spread of BSE and related diseases. Our model shows that there are critical thresholds between the infection's expansion and decrease related to the lifespan of the hosts, the growth rate of the prions, and the amount of prions circulating between hosts. The ecological disease dynamics can be intrinsically oscillatory, having outbreaks as well as refractory periods which can make it appear that the disease is under control while it is still increasing. We show that non-susceptible species that have been intentionally inserted into a feedback loop to stop the spread of disease do not, strictly by themselves, guarantee its control, though they may give that appearance by increasing the refractory period of an epidemic's oscillations. We suggest ways in which age-related dynamics and cross-species coupling should be considered in continuing evaluations aimed at maintaining a safe food supply.
This briefing describes the first deployment of a new electronic tracer (E-tracer) for obtaining along-flowpath measurements in subsurface hydrological systems. These low-cost, wireless sensor platforms were deployed into moulins on the Greenland Ice Sheet. After descending into the moulin, the tracers travelled through the subglacial drainage system before emerging at the glacier portal. They are capable of collecting along-flowpath data from the point of injection until detection. The E-tracers emit a radio frequency signal, which enables sensor identification, location and recovery from the proglacial plain. The second generation of prototype E-tracers recorded water pressure, but the robust sensor design provides a versatile platform for measuring a range of parameters, including temperature and electrical conductivity, in hydrological environments that are challenging to monitor using tethered sensors.
The availability of high-resolution, continent-scale digital elevation models will enable new science. However, analyzing such data requires the development of new algorithms that can handle long-range spatial dependencies. Such algorithms should ideally benefit users with supercomputers as well as users with modest resources. Here, I present evidence that this can be accomplished and highlight possible paths forward.
On many maps, Greenland appears large enough to be a continent. That's because putting a grid on a sphere is hard. More generally, spatial analyses involving binning often require that every bin have the same area, but this is impossible using a rectangular grid laid over the Earth, or over any projection of the Earth. Further, the cells of rectangular grids are not equidistant and adjacent cells may be joined by either lines or points. In the past, these issues have been addressed by sweating over the esoteric details of projections. But what if there was a single family of projections that could solve all of these problems? Discrete global grids use hexagons, triangles, and diamonds to overcome such issues, overlaying the Earth with equally-sized bins. However, despite their utility, it has only been recently that packages have become available for managing discrete global grids. This talk will provide a primer on discrete global grids, their uses, standardization efforts, and examples of these emerging packages in action.
Agricultural production has increased greatly over the past century, but gains have often come at the cost of long-term sustainability. Crop systems often require fossil fuel-based fertilizers, strain sources of fresh water, contribute to soil loss, and may ultimately reduce arable land. Addressing these shortfalls is essential for future food production, especially in the face of an increasing global population. Perennial crops offer a possible alternative to the annuals upon which current agriculture systems are based. They sequester nutrients and may reduce both soil erosion and the need for tilling. Additionally, because perennial grains have reduced input costs, they may equal the profitability of an annual grain even while producing lower yields.
However, despite their potential, relatively little theoretical research has been done on high-yielding perennial grains. This may be partly because they have not been found in nature. But it may also be a consequence of theories which predict mutually exclusive trade-offs between longevity and seed production. Whatever the case, the controlled conditions of an agriculture system present a novel selective regime which can be studied in its own right and exploited to develop life cycle strategies not possible elsewhere.
Results/Conclusions
Accordingly, we have developed a physiological model of resource allocation within a grain species. The allocation functions themselves are mutable. This permits virtual breeding of modeled grains in order to explore the model's "gene space" and to locate optimal plants for a given set of harvest conditions.
The model is observed to rapidly produce both annual and perennial solutions following a random initialization. Both annuals and perennials may be bred to produce a perennial or annual, respectively. Perennial seed production in the model has been observed to equal or surpass that of annuals under some conditions. Insofar as the model is representative of reality, the implication is that high-yielding perennial grains may be bred in the real world, and that they may offer a competitive alternative to annuals.
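Purely to illustrate the virtual-breeding loop described above (the model's actual allocation functions are far richer than this invented one-parameter "genome"):

```python
import random

def seed_yield(a, years=10):
    """Toy stand-in for the model: gene a in [0, 1] is the fraction of
    resources allocated to seed rather than to overwinter storage; plants
    whose storage falls too low die after harvest (an annual strategy)."""
    total, resources = 0.0, 1.0
    for _ in range(years):
        total += a * resources
        storage = (1 - a) * resources
        if storage < 0.1:          # starved: no regrowth next season
            break
        resources = 1.0 + storage  # stored reserves boost next year's growth
    return total

# Virtual breeding by mutation and selection on the toy gene:
a = random.random()
for _ in range(1000):
    mutant = min(1.0, max(0.0, a + random.gauss(0, 0.05)))
    if seed_yield(mutant) > seed_yield(a):
        a = mutant
print(f"bred allocation fraction {a:.2f} with yield {seed_yield(a):.2f}")
```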
Agricultural regions of the world have vast natural watersheds of lakes, ponds, rivers, and streams. Interwoven are large artificial watersheds of drain tiles and ditches. Just as household plants benefit from drains in their pots, so agricultural crops benefit from drain tiles in their fields. In today's world the drain tiles carry not only water but also fertilizers, antibiotics, pesticides, hormones, and other chemicals into the natural watershed. We analyzed where wetlands and buffers for bioenergy and other applications could be placed on the Minnesota landscape to intercept drain waters and help purify them before they reach the natural watershed.
Numerous studies have shown electromyographic signals (EMGs) to be useful for controlling prostheses and orthoses. Their great potential stems from the degree of voluntary control we wield over these signals, even if limbs are missing, especially in the EMGs of skeletal muscles. However, despite several decades of exploration, the potential of electrical myoimpedance (EMI) as a separate control signal has been largely ignored. The greatest barrier to utilizing this signal is that skin-based measurements suffer from low sensitivity, with the only alternative being invasive methods, such as needles.
Since the EMI signal is known to correlate with the passive properties of materials under test, specifically their geometry, it is expected to be highly sensitive to the morphologic changes which occur during concentric contractions. In contrast, EMG signals occur any time a muscle is contracted, whether or not this results in morphologic changes. Therefore, the EMI and EMG signals should not be highly-correlated, suggesting an additional control channel.
To test this, we developed a non-invasive procedure for making simultaneous skin-based measurements of EMG and EMI signals during both concentric and isometric contractions. A video camera was synchronized with the measurement system to facilitate the correlation of signal features with muscle actions. We conclude that the two signals can be distinguished.
Perennial possibilitiesR. Barnes, C. Lehman, M. Kantar, L. DeHaan, D. Wyse2013 U of MN Student Sustainability Symposium Lightning Talks (4/5)Slides
Agricultural production has increased greatly over the past century, but gains have often come at the cost of long-term sustainability. Crop systems often require fossil fuel-based fertilizers, strain or deplete sources of fresh water, contribute to soil loss, and may ultimately reduce arable land. Addressing these shortfalls is essential for future food production, especially in the face of an increasing global population. Perennial crops offer an attractive alternative to the annuals upon which current agriculture systems are based. They sequester nutrients and may reduce both soil erosion and the need for tilling. Additionally, because perennial grains have reduced input costs, they may equal the profitability of an annual grain even if they ultimately yield less seed.
However, despite their potential, relatively little research has been done on high-yielding perennial grains. This may be partly because they have not been found in nature. But it may also be a consequence of theories which predict mutually exclusive trade-offs between longevity and seed production. Whatever the case, the controlled conditions of an agriculture system present a novel selective regime which can be studied in its own right and exploited to develop life cycle strategies not possible elsewhere in nature.
Accordingly, we have developed a physiologic model of resource allocation within a grain and parameterized this model based on actual environmental data and crop mass ratios. The allocation functions themselves are mutable. This permits the breeding of modeled grains in order to explore the model's "gene space" and to locate optimal plants for a given set of harvest conditions. My talk will discuss our model as well as some promising preliminary findings indicating that perennial grains may yet be a possibility.
Interwoven with the natural watersheds of the United States—made up of lakes, ponds, and rivers—are large artificial watersheds of drain tiles and ditches which prevent fields from becoming too damp by draining them of excess water. The drained water carries fertilizers, antibiotics, pesticides, hormones, and other chemicals into natural watersheds. Fortunately, wetlands and vegetative buffers can provide ecosystem services to filter and absorb such chemicals, leaving natural waterways clean. Additionally, the filtering vegetation can be used as a biofuel source. This project has used public databases, coupled with efficient new GIS algorithms which reduce processing times from days to minutes, to analyse the landscape and automatically determine wetlands suitable for rehabilitation. As a result, it is possible to make policy recommendations on state-wide scales, while the ever-evolving nature of these artificial watersheds reduces the cost of interventions. This talk explains the project, presenting the algorithms, processes, and data sources used, and discussing the data we have available for understanding the practicalities of resulting solutions.
Intractable problems in modeling ecosystems and their services can often be solved through the application of appropriate algorithms: reconceptualization of a problem may reduce computation times from days to minutes. Despite this, many popular analysis packages—and many modelers—use inefficient algorithms. For example, statewide DEMs have reached 1m resolutions, resulting in datasets on the terabyte level. Appropriately used, such data has a tremendous capacity to inform and impact ecosystem management and policy, yet processing single watersheds is still considered onerous by those using traditional GIS algorithms. We overview efficient algorithms which have allowed us to automatically identify remediable wetlands across thousands of square miles of Minnesota. The same techniques which permit such processing have analogues in the realm of disease modeling, enabling us to track the fates of each individual in a population of millions. Tuberculosis, mad cow disease, competition among grains, the flow in a watershed, and the decline of insect pollinators all have the same computational requirements which can be addressed through the judicious use of old, new, and under-utilized algorithms. As above, this talk addresses our experience approaching a range of problems and problem domains with an eye towards the commonalities and practical take-aways for attendees.
Given vast increases in computing capacity, applications in science and engineering that were formerly interpreted with ordinary or partial differential equations, or by integro-partial differential equations, can now be understood through microscale modeling. Interactions among individual particles—be they molecules, viruses, or individual humans—are modeled directly, rather than first abstracting the interactions into mathematical equations and then simulating the equations. One approach to microscale modeling involves scheduling all events into the future, wherever that is possible. With sufficient space-for-time tradeoffs, this considerably improves the speed of the simulation, but requires scheduling algorithms of high efficiency. In this paper we describe our variation on calendar queues and their usage, presenting detailed algorithms, intuitive explanations of the methods, and notes from our experiences applying them in large-scale simulations. Results can be useful to scientists in ecology, epidemiology, economics, and other disciplines that employ microscale modeling.
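The calendar-queue variant itself is detailed in the paper; the sketch below shows only the future-event-scheduling pattern it accelerates, using a plain binary heap (a calendar queue plays the same role but achieves O(1) amortized operations by hashing events into time buckets):

```python
import heapq
import itertools

class EventQueue:
    """Minimal future-event scheduler keyed on event time."""
    def __init__(self):
        self._heap = []
        self._tie = itertools.count()  # break ties between simultaneous events

    def schedule(self, time, action):
        heapq.heappush(self._heap, (time, next(self._tie), action))

    def run_until(self, t_end):
        while self._heap and self._heap[0][0] <= t_end:
            time, _, action = heapq.heappop(self._heap)
            action(time, self)  # an action may schedule further events

# Example: an infected individual schedules its own future recovery.
q = EventQueue()
def infect(t, q):
    print(f"t={t}: infected; recovery scheduled")
    q.schedule(t + 7, lambda t, q: print(f"t={t}: recovered"))
q.schedule(0, infect)
q.run_until(100)
```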
This paper presents a parallel algorithm for calculating the eight-directional (D8) up-slope contributing area in digital elevation models (DEMs). In contrast with previous algorithms, which have potentially unbounded inter-node communications, the algorithm presented here realizes strict bounds on the number of inter-node communications. Those bounds in turn allow D8 attributes to be processed for arbitrarily large DEMs on hardware ranging from average desktops to supercomputers. The algorithm can use the OpenMP and MPI parallel computing models, either in combination or separately. It partitions the DEM between slave nodes, calculates an internal up-slope area by replacing information from other slaves with variables representing unknown quantities, passes the results on to a master node which combines all the slaves' data, and passes information back to each slave, which then computes its final result. In this way each slave's DEM partition is treated as a simple unit in the DEM as a whole and only two communications take place per node.
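The bounded inter-node communication scheme is the paper's contribution and is not reproduced here; the sketch below is only the serial core each node runs on its own partition, assuming D8 flow directions are already known:

```python
import numpy as np

# The eight D8 neighbour offsets: N, NE, E, SE, S, SW, W, NW.
NEIGHBOURS = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

def d8_up_slope_area(flowdirs):
    """Up-slope contributing area in cells, given D8 flow directions
    (indices into NEIGHBOURS, or -1 for cells that do not drain). Each
    cell passes its total downstream once all of its donors are resolved."""
    rows, cols = flowdirs.shape
    area = np.ones((rows, cols))
    donors = np.zeros((rows, cols), dtype=int)

    def receiver(r, c):
        dr, dc = NEIGHBOURS[flowdirs[r, c]]
        return r + dr, c + dc

    for r in range(rows):
        for c in range(cols):
            if flowdirs[r, c] >= 0:
                donors[receiver(r, c)] += 1
    stack = [(r, c) for r in range(rows) for c in range(cols) if donors[r, c] == 0]
    while stack:  # a Kahn-style topological sweep of the flow graph
        r, c = stack.pop()
        if flowdirs[r, c] < 0:
            continue
        nr, nc = receiver(r, c)
        area[nr, nc] += area[r, c]
        donors[nr, nc] -= 1
        if donors[nr, nc] == 0:
            stack.append((nr, nc))
    return area

# A 1x3 hillslope draining east accumulates areas 1, 2, 3.
print(d8_up_slope_area(np.array([[2, 2, -1]])))
```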
A significant hurdle to the understanding of ice sheet basal hydrology and its coupling with ice motion is the difficulty in making in-situ measurements along a flow path. While dye tracing techniques may be used in small glaciers to determine transit times of surface melt water through the sub-glacial system, they provide no information on in situ conditions (e.g. pressure) and are ineffective at ice-sheet scale where dilution is high. The use of tethered sensor packages is complicated by the long lengths (~100s of m) and tortuous paths of the moulins and conduits within ice sheets. Recent attempts to pass solid objects (rubber ducks) and other sensor packages through glacial moulins have confirmed the difficulty in deploying sensors into the subglacial environment. Here, we report the first successful deployment and recovery of compact, electronic units in moulins up to 7 km from the margin of a large land-terminating Greenland outlet glacier. The technique uses RF (Radio Frequency) location to create an electronic tracer (an 'e-tracer'), enabling a data-logging sensor package to be located in the pro-glacial flood plain once it has passed through the ice sheet. A number of individual packages are used in each deployment, mitigating the risk that some may become stuck within the moulin or lodge in an inaccessible part of the floodplain. In preliminary tests on the Leverett glacier in West Greenland during August 2009 we demonstrated that this technique can be used to locate and retrieve dummy sensor packages: 50% and 20% of the dummy sensor packages introduced to moulins at 1 and 7 km from the ice sheet terminus, respectively, emerged in the sub-glacial stream. It was possible to effectively detect the e-tracer units (which broadcast on 151 MHz with 10 mW of power) over a horizontal range of up to 5 km across the pro-glacial floodplain and locate them to high accuracy, allowing visual recognition and manual recovery. These performance statistics give this technique strong potential for investigating in-situ conditions along a flow path at ice sheet scale.
Mendenhall Glacier is a temperate maritime glacier that has undergone nearly continuous retreat since the end of the Little Ice Age, circa 1750 AD, in the Juneau area. Twenty-first-century recession rates have increased relative to the late 1940s, when the entire terminus was in contact with this pro-glacial lake. Accelerated glacier ice thinning and ablation through lake calving events have been monitored over the last decade. Bathymetric data collected from a basin just south of the current lake-front terminus between 2004 and 2008 shows sediment infilling, with the maximum depth shallowing from 81.7 meters below mean sea level in 2004 to 77 m below mean sea level in 2008. During this time the footprint of Mendenhall Lake has expanded to the north following terminus recession, exposing an LGM cirque basin. Lake basin morphometry was first measured beginning in 1973 by fisheries biologists in the Alaska Department of Fish and Game. Since 2000, regular bathymetric surveys of the lake have been conducted. By combining lake discharge measurements with total suspended sediment data from the Mendenhall River, the total volume of suspended sediment discharged by the Mendenhall Glacier into its lake is estimated.
We present the outline of a class project in which entry-level students in our CS1 course spent a full day developing a text-based massively multi-player online role-playing game (MMORPG) using Scheme. We describe briefly our CS1 course, the specifics of the game we asked students to implement, and the project organization. Comments from the students about their experience are also presented. Most students felt that the project was a beneficial learning experience. The project was organized as part of a larger multi-year effort to increase student learning and student participation. Class performance shows that more students have completed the course and have obtained higher grades than in the past, providing support to the educational value of this project and the other active learning opportunities we have used during the semester.
Peaks in species richness at mid-elevation bands have been observed in ecosystems and taxa around the globe. A number of ecological processes may contribute to this including varying autotrophic productivity, tradeoffs between competitive ability and environmental tolerance, and differences in area and isolation. Evolutionary processes have also been suggested; however, such explanations are difficult to support, as it is often unclear how speciation and extinction rates have changed over time.
Here, we use records of historical temperature and topographic changes over the past 65 Myr to construct an agent-based simulation model of Plethodontid salamander evolution in eastern North America. We then explore possible mechanisms constraining species to mid-elevation bands by using the model to predict Plethodontid evolutionary history and contemporary geographic distributions.
Our results show that models which incorporate both temperature and topographic changes are better able to predict observed patterns, suggesting that both processes may have played an important role in driving Plethodontid evolution in the region. Additionally, our model represents a proof of concept to encourage future work that takes advantage of recent advances in computing power to combine models of ecology, evolution, and earth history to better explain the abundance and distribution of species over time.
The United States is home to a vast natural water system of rivers and lakes. Interwoven is a large artificial system of drain tiles and ditches. The porous boundary between these systems transmits fertilizers, antibiotics, pesticides, hormones, and other chemicals. In addition, erosion may lead to topsoil loss. Funding for mitigation, rehabilitation, and conservation is often allocated at the state or national level, yet the system is so complex, and the data so large, that existing tools cannot analyze it at this scale. The goal, then, is to produce new algorithms which will enable GIS tools to quickly analyze terabytes or more of data. This, in turn, will facilitate landscape optimization.
This project analyzes where wetlands and other vegetated buffers can be placed on the landscape to intercept drain waters and help purify them before they reach the natural watershed. The computational problem arises because new LIDAR images have expanded the resolution of geographic digital elevation models (DEMs) up to a thousandfold or more. This in turn has taxed the ability of existing algorithms to process the expanded datasets. Here we explain the project and present new efficient algorithms for parallel and scalar processing that reduce run-times from days on ordinary computers to minutes or seconds in a parallel supercomputing environment.
“Ecology of star clusters” examines the interactions of stars, and what can be gleaned from those interactions about the evolution and ultimate fate of star clusters large and small. Questions like: 1) What is the force that causes small star clusters to expand? 2) Why do the ancient globular clusters have many binary stars? 3) Why are close triple stars so rare?
Typical pathogens are minuscule and immobile and therefore need external means of transferring from host to host. Many manifestations of disease are merely mechanisms to effect such transfer, with some of the more elaborate mechanisms appearing in vector-borne diseases. In the simplest case, the disease infects two hosts alternately, one being small and mobile, the other larger. In vector-borne diseases such as malaria, the pathogen is relatively non-virulent in one host (the mosquito vector) and more virulent in the other (the vertebrate host). Non-virulence in the vector clearly favors transmission of the pathogen, as the vector remains relatively healthy to move about and propagate the pathogen. But how did such systems evolve? Here we show that their evolution is a natural consequence of the dynamics of diseases which alternate between hosts, where selection pressures on the pathogen tend to decrease the difference in infectivity between the hosts but increase the difference in virulence. The maximal difference in virulence occurs when a disease becomes effectively non-virulent in one of the hosts. When this occurs in the smaller of the two hosts, it is simply referred to as a vector-borne disease. The selection pressures remain consistent from emergence near the disease-free equilibrium through high-prevalence at interior equilibria, as indicated by terms in the eigenvalues of broad macroscale models and by detailed microscale simulations. Curiously, similar dynamics appear in sexually transmitted diseases when they alternate between male and female, where such diseases can become comparatively non-virulent in one of the two sexes. The theory we present can shed light on the natural history of vector-borne and sexually transmitted diseases, and can provide information for their treatment and amelioration.
Climate change has profound implications for the sustainability of society and the environment, yet estimates of climate change cover time scales which make results difficult to verify, are often computationally expensive to make, and have uncertainties which are not easily communicated, especially outside the area of computational meteorology and mathematics. We present a method of quantifying climate change over the past century and into the near future which bypasses many of these problems. Using historical weather data and a surface-fitting algorithm, we are able to extract "climate velocities", representing the surface speed and direction of the climate for any location. Projections from these velocities can be used to extract possible future locations and directions-of-movement of biomes, biofuel hotspots, and agricultural productivity, with implications for conservation parkways, preemptive revegetation, and agricultural policy.
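The talk's surface-fitting procedure is not reproduced here, but the quantity itself is simple to illustrate: one widely used definition of climate velocity divides the local warming rate by the magnitude of the spatial temperature gradient.

```python
import numpy as np

def climate_velocity(temp_trend, temp_field, cell_km=1.0):
    """Climate velocity (km/yr) as the temporal trend (°C/yr) divided by
    the spatial gradient magnitude (°C/km). This is one common definition;
    the talk's surface-fitting approach estimates the gradients differently."""
    dT_dy, dT_dx = np.gradient(temp_field, cell_km)
    grad_mag = np.hypot(dT_dx, dT_dy)
    return temp_trend / np.maximum(grad_mag, 1e-9)  # guard against flat fields

# A field warming at 0.03 °C/yr atop a 0.005 °C/km equator-to-pole gradient
# must shift ~6 km/yr poleward for any location to keep its climate.
temps = 10.0 - 0.005 * np.arange(100)[:, None] * np.ones((100, 100))
print(climate_velocity(0.03, temps).mean())  # ~6.0
```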
Electromagnetic VLF emissions banded in frequency, coincident with warm energy-banded ions in the low latitude auroral zone, and associated with very strong geomagnetic storms, are observed separately on two low-earth polar orbiting satellites, FAST and DEMETER. Both satellites carry a full complement of field and particle detectors. The FAST satellite, launched August 21, 1996 into an elliptical polar orbit with perigee 350 km and apogee 4175 km, traversed the auroral zone four times per orbit across a wide range of altitudes and local times. The DEMETER satellite was launched on June 29, 2004 into a circular sun-synchronous polar orbit at altitude 710 km, with data recorded at all invariant latitudes less than ~65 degrees. The ion bands were first reported in association with the Halloween storms [Cattell et al., 2004; Kozyra et al., 2004, Yao et al., 2008]. Banded ions are observed on FAST during every large magnetic storm in discrete energy bands at energies ~10 eV – 10 keV and lasting up to 12 hrs. The energy flux peaks in the trapped population but is also evident in the precipitating ions, and in certain cases a significant upgoing ion component appears at low invariant latitudes. These bands were observed over several orbits at similar latitudes in both dawn and evening sectors, with the signature typically more pronounced in the dawn sector. In this study we focus on the coincidence of the energy-banded ions with observations of frequency-banded VLF electromagnetic emissions. During all of these very large storms, banded VLF emissions are evident in both the electric and magnetic field, appearing as discrete frequency bands between ~100 and ~1500 Hz separated by 75–150 Hz. These banded emissions persist for several FAST or DEMETER orbits, lasting up to 10 hrs, in both the northern and southern hemispheres. There appears to be a correlation between the banded wave observations and ion and electron density enhancements. Possible generation mechanisms for the banded emissions include EMIC waves generated in the equatorial ring current region which bounce to higher L-shells and propagate down auroral field lines to the spacecraft location.
Agricultural production has increased greatly over the past century, but gains have often come at the cost of long-term sustainability. Crop systems often require fossil fuel-based fertilizers, strain sources of fresh water, contribute to soil loss, and may ultimately reduce arable land. Addressing these shortfalls is essential for future food production, especially in the face of an increasing global population. Perennial crops offer a possible alternative to the annuals upon which current agriculture systems are based. They sequester nutrients and may reduce both soil erosion and the need for tilling. Additionally, because perennial grains have reduced input costs, they may equal the profitability of an annual grain even while producing lower yields.
However, despite their potential, high-yielding perennial grains remain contentious: it is unclear whether their production potential can be increased to levels comparable to annual grains. Here, I argue that high-yielding perennial grains, if present in nature, would exist in an unstable region of the evolutionary state space and be replaced by the low-yielding, long-lived and high-yielding, short-lived species we see today. I propose that, despite this, the controlled conditions of an agricultural system present a novel selective regime which can be studied in its own right and exploited to develop life cycle strategies not possible elsewhere.
Accordingly, we have developed a physiological model of resource allocation within a grain species. The allocation functions themselves are mutable. This permits virtual breeding of modeled grains in order to explore the model's "gene space" and to locate optimal plants for a given set of harvest conditions.
The model is observed to rapidly produce both annual and perennial solutions following a random initialization. Both annuals and perennials may be bred to produce a perennial or annual, respectively. Perennial seed production in the model has been observed to equal or surpass that of annuals under some conditions. Insofar as the model is representative of reality, the implication is that high-yielding perennial grains may be bred in the real world, and that they may offer a competitive alternative to annuals.
I am an Associate Editor for Computers & Geosciences.
Reviews
I have reviewed articles for the following journals:
Computers & Geosciences
Earth Science Informatics
Geoscience and Remote Sensing Letters
Hydrological Processes
International Journal of Digital Earth
International Journal of Geographical Information Science
Journal of Hydrology
Ocean Engineering
waterviz.com
waterviz.com was the result of a 24 hour hack for the 2015 NASA Space Apps Challenge. The site gathered real-time flow information from the National Water Information System's water gauges throughout the United States, compared this against historical flow information, and then displayed the data visually by colouring rivers from the National Hydrography Dataset. The effect is being able to see a real-time or historic glimpse of every river in the United States at any zoom level. Historic hurricane data and information from the National Land Cover Database (NLCD) were added to facilitate interpretation.
Awards:
2015 NASA Space Apps Challenge Virtual Division Winner (from a field of 120)
MyFutureClimate
MyFutureClimate used an ensemble of historic and predicted gridded climate data from global climate models to predict the movement of envelopes of similar climates. This, for instance, allows one to say that "the climate of Minnesota in 20 years will be like the climate of Iowa today".
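A hedged sketch of the underlying analog matching (the site's actual datasets, variables, and similarity measure are not shown here): for each cell, find the present-day cell whose climate most resembles that cell's projected future climate.

```python
import numpy as np

def climate_analogs(present, future):
    """Brute-force nearest-neighbour analog search over (n_cells, n_vars)
    arrays of climate descriptors; returns, for each cell, the index of
    the present-day cell most similar to its projected future climate."""
    analogs = np.empty(len(future), dtype=int)
    for i, fut in enumerate(future):
        analogs[i] = np.argmin(np.linalg.norm(present - fut, axis=1))
    return analogs

# Toy example: three cells described by (mean temperature, annual precipitation).
present = np.array([[8.0, 700.0], [11.0, 800.0], [14.0, 900.0]])
future = present + [3.0, 100.0]          # every cell warms and gets wetter
print(climate_analogs(present, future))  # cell 0's future resembles cell 1 today
```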
Awards:
2014 Hack4Good Global Climate Challenge Grand Winner
2014 Hack4Good Global Climate Challenge Visualization Category Winner
omgTransit.com presents a minimalistic interface for a user to explore public transit options near them while providing real-time information about these options. It merges information from a variety of databases and APIs, providing a cohesive view of what would otherwise be disparate data. The app runs on Rails and Node.js with ElasticSearch, Redis, and PostgreSQL handling data. On the client side, Bootstrap, Backbone, jQuery, Underscore, and Google Maps v3 work together.
Awards:
2014 Minnesota Cup Entrepreneur Contest 3rd Place, High Tech Division
2014 Knight Green Line Transit Challenge Finalist Link
2014 Beta.MN Start-up Competition Winner
2013 Champions of Change commendation from the White House
2013 ESRI National Day of Civic Hacking Innovation Award
Climate Tracker
Climate Tracker is a simple model for visualizing historic climate data from the United States and Canada as tracks running across the landscape and through time. The tracks Climate Tracker produces can be used to calculate climate velocities, providing a simple means of understanding climate, its changes, and its interactions with the landscape. It runs on a LAMP stack with C++, Python, and NumPy providing analysis. The client side uses jQuery and OpenLayers.
The Open Harp
TheOpenHarp is a web app for easily viewing and browsing public domain tunebooks and hymnals online.
Individuals suffering from spinal injuries often have difficulty controlling urination. A common solution is a leg bag connected to a catheter. These bags are often inaccessible to their users and must be emptied regularly. Though this can be done by a care assistant, doing so compromises the independence of the user.
Devices do exist to permit self-emptying, but most of the individuals we spoke with thought of these devices as unreliable and difficult to operate. The devices are quite expensive and often have long repair times. Additionally, the devices do not sense urine levels, which can cause urine to back up into a user's kidneys, potentially leading to serious health complications.
To address this, we used Cordova to build an app which interfaced via Bluetooth with an Arduino. The Arduino, in turn, could be used to control leg bag emptying and to sense urine levels. The app could display this information along with a detailed use history, which can help a user time urination as well as provide medically-relevant information.
Myoimpedance Prosthetic Sensors
Role: DAAD Professional Research Intern Fellow, Institut für Bioprozess- und Analysenmesstechnik, Heiligenstadt, Germany, 2012
Technologies: Matlab, Electronics
Numerous studies and applications have shown electromyographic signals (EMGs) to be useful for controlling prostheses and orthoses. Their great utility stems from the degree of voluntary control a user can wield over these signals, even if limbs are missing, especially in the EMGs of skeletal muscles. Despite the success and utility of EMGs, a related signal, the electrical myoimpedance (EMI), has been largely ignored. Previously this was due to the low sensitivity of skin-based measurements and the undesirability of invasive methods, such as needles. But these are problems which can be overcome. In this project, I demonstrated that non-invasive skin-based EMI measurements are possible and that they provide additional information which is not present in the EMG signal.
The EMG signal is a measurement of the voltage drop between two or more points. Since muscle contractions are caused by neuroelectric signals and produce voltage changes, EMG signals occur any time a muscle is contracted. In contrast, the EMI signal is a measurement of a muscle's resistance to an injected current flow at a particular frequency or set of frequencies. Since resistance is directly proportional to the current's path length and inversely proportional to its cross-sectional area, the EMI signal correlates with the geometry of the muscle. Therefore, it is expected to be sensitive to morphologic changes which occur during concentric contractions.
Schematic representation of measurement setup
To test this, I developed a non-invasive procedure for making simultaneous skin-based measurements of EMG and EMI signals during both concentric and isometric contractions. A 1V square-wave signal was passed through a 100kΩ resistor to produce a small injection current for measuring the EMI. This signal, along with the EMG, was sampled using a two-electrode sensor. A Fourier transform was used to separate the EMI and EMG signals. To facilitate the correlation of signal features with muscle actions, a video camera was synchronized with the measurement system.
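A toy reconstruction of the separation step, with every signal parameter invented for illustration: the injected square wave concentrates EMI energy at the injection frequency and its odd harmonics, while the EMG is broadband, so masking those bins in the spectrum splits the two.

```python
import numpy as np

fs, f_inj = 10_000, 1_000  # sample rate and injection frequency in Hz (illustrative)
n = fs                     # one second of samples, so FFT bin k is exactly k Hz
emg = np.random.default_rng(0).normal(0.0, 0.1, n)     # broadband stand-in for the EMG
half_period = np.arange(n) * 2 * f_inj // fs
emi = 0.5 * np.where(half_period % 2 == 0, 1.0, -1.0)  # square-wave EMI response
measured = emg + emi                                   # what the shared electrodes record

spectrum = np.fft.rfft(measured)
is_emi = np.isin(np.arange(spectrum.size), [f_inj, 3 * f_inj, 5 * f_inj])
emi_est = np.fft.irfft(np.where(is_emi, spectrum, 0), n=n)
emg_est = np.fft.irfft(np.where(is_emi, 0, spectrum), n=n)
print(np.abs(emi_est - emi).mean(), np.abs(emg_est - emg).mean())  # both residuals are tiny
```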
Example of a measurement: blue is the active (EMG) signal and green is the passive (EMI) signal
Fifty isometric and fifty concentric contractions were recorded, normalized, and co-registered, resulting in the aggregated signal views shown below, where red indicates high densities of measured points and light blue indicates low densities. The EMI signal shows stark differences depending on the contraction whereas the EMG signal does not. Low-density regions in both signals are caused by high-amplitude signals, but such regions cannot be used to differentiate between types of contractions using the EMG signal because they are wholly dependent on the length of a contraction, be it concentric or isometric.
Aggregated muscle response signals
Intraglacial Flow Sensors
Role: Research Collaborator, Sensor Payload Designer, University of Bristol, 2009
Technologies: MSP430, Python, Electronics
During the summer, meltwater on the surface of glaciers forms pools and rivers. Many of these eventually drain to the base of the ice sheet through moulins. Basal meltwater is believed to play a significant role in the movement of glaciers and the evolution of ice sheets by lubricating the base of the glacier or, in high-pressure situations, actually lifting it.
Diagram of a moulin
Unfortunately, extracting measurements from the extreme environments of subglacial hydrologic systems poses severe logistic and technical challenges for scientists. First, you have to get to the moulin without sliding in, then you have to get your equipment through more than a half-kilometer of ice… and get it back again.
The entrance of a moulin
Dye tracing has been used to constrain transit times and dispersion properties of these systems, but it cannot provide information about the temperature, pressure, and distribution of turbulence of basal flows. Tethered sensors provide a partial solution and have yielded pressure information on small valley glaciers. However, there are challenges associated with first passing tethered sensors through thicker ice sheets and, second, finding flow channels of interest amid highly turbulent environments and tortuous flow paths. Additionally, the data gathered by a tethered sensor is limited to a single point or a small number of points. Consequently, a device is required that can measure water pressure, temperature, and turbulence at a range of locations near the ice bed without requiring a constant physical connection with the surface.
A prototype e-Tracer
Our solution to this problem was to develop an electronic tracer (E-tracer) capable of travelling through the subglacial drainage system, measuring and recording in situ information as it transits. The device is equipped with a radio direction finding (RDF) transmitter, which allows it to be located once it has emerged from beneath the ice sheet for collection of data stored on the internal, non-volatile memory. This provides a new way of accessing the ice sheet bed and making in situ measurements, which is potentially transferable to other sub-surface environments where access problems prevent the deployment of conventional sensing technologies.
An early E-tracer sensor payload prototype
The E-tracers are composed of a microcontroller, with radiofrequency (RF) beacon, data storage, and sensors contained within a spherical housing. The density of the device is adjusted for neutral buoyancy, meaning that the tracers float just below the water surface. The E-tracers consume 0.9 mW of average power, giving them a three-month lifespan on a single half-AA lithium battery. The recovery beacon has a line-of-sight detection range of 3–5 km.
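A quick energy-budget check, assuming a typical half-AA lithium cell of roughly 1 Ah at 3.6 V (the actual cell's capacity is not given above), is consistent with those figures:

```python
battery_wh = 1.0 * 3.6    # assumed capacity: ~1 Ah at 3.6 V
avg_power_w = 0.9e-3      # 0.9 mW average draw, as quoted above
months = battery_wh / avg_power_w / 24 / 30
print(f"{months:.1f} months of capacity")  # ~5.6: a 3 month lifespan fits comfortably
```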
This project represents the first successful deployment and recovery of wireless sensors through the subglacial drainage system beneath the Greenland ice sheet.