Property:Extended data description

This is a property of type Text.

Showing 38 pages using this property.
H
The HydroLAKES database was designed as a digital map repository to include all lakes with a surface area of at least 10 ha. Version 1 comprises the shoreline polygons of 1,427,688 individual lakes. HydroLAKES aims to be as comprehensive and consistent as possible at a global scale and contains both freshwater and saline lakes, including the Caspian Sea, as well as human-made reservoirs and regulated lakes. The HydroLAKES database was created by compiling, correcting, and unifying several near-global and regional datasets, foremost the SRTM Water Body Data (SWBD; Slater et al., 2006) for regions from 56˚S to 60˚N, and CanVec (Natural Resources Canada, 2013) for most North American lakes. Map generalization methods were applied and some polygon outlines were smoothed during the mapping process to ensure spatial consistency of the data. The resulting map scale is estimated to be between 1:100,000 and 1:250,000 for most lakes globally, with some coarser ones at 1:1 million.  +
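As an illustration only, the 10 ha threshold can be applied when loading the polygons with geopandas; the shapefile name and the area attribute ("Lake_area", assumed to be in km²) are assumptions here, not confirmed by the description above.

```python
# Minimal sketch: load HydroLAKES shoreline polygons and keep lakes >= 10 ha.
# The file name and the "Lake_area" column (assumed km^2) are hypothetical.
import geopandas as gpd

lakes = gpd.read_file("HydroLAKES_polys_v10.shp")
big_enough = lakes[lakes["Lake_area"] >= 0.1]   # 10 ha = 0.1 km^2
print(len(big_enough), "lake polygons at or above the 10 ha threshold")
```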
I
The ICE-5G (VM2) model mathematically analyses glacio-isostatic adjustment processes and provides model data on global ice sheet coverage, ice thickness, and paleotopography at 10 arc-minute spatial resolution for 21 ka and 0 ka, and at 1 degree spatial resolution for intervals in between these snapshots. These are NetCDF files.  +
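A minimal sketch of inspecting one of these NetCDF snapshots with xarray; the file name is a placeholder, and the coordinate and variable names in the actual distribution may differ.

```python
# Open one ICE-5G (VM2) snapshot and list its contents (file name hypothetical).
import xarray as xr

ds = xr.open_dataset("ice5g_vm2_21ka.nc")
print(ds.coords)      # expect latitude/longitude coordinates
print(ds.data_vars)   # expect ice thickness and paleotopography fields
```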
N
The Indian National River Linking Project (NRLP), or Indian Rivers Interlinking project (IRI), proposes a major redistribution of water resources over the Indian subcontinent. A large engineering effort is proposed to redistribute monsoonal water from the Himalayas and foothills, store water in reservoirs, and route it via canals ("links") to the drier regions of Southern India. A total of 29 links and 43 dams and barrages are proposed as part of the project. The plan would provide water resources for agriculture, drinking water and industrial use to a growing population in central and southern India, while potentially improving flood control in the northern and mountainous areas. The project would also result in a major reorganization of watersheds, with possible impacts on ecosystems and the environment. There would be impacts on trans-boundary rivers. Provided here are two databases: (1) the dams database, with locations, operating specifications, sources, and notes on the population expected to be displaced; and (2) the canals database, with locations, operating specifications, and further notes. The databases are available as shapefiles for GIS visualization - click a feature to see its database information. A "rivers" shapefile is also available for help in generating visualizations. Note that the rivers are not currently labeled in the shapefile. Raw txt/csv format is also available for the canals and dams databases. An annotated reference list is included to give specifics on the sources from which each number was obtained and/or calculated. The databases are constructed from hundreds of government reports, geo-referenced maps, planning and design documents, and OpenStreetMap data. For the full methodology and calculations regarding displaced populations, see the accompanying manuscript: Higgins et al., 2017. For the graph database tool used to calculate basin connectivity changes and water discharge changes for given river mouths, see the GitHub page: https://github.com/sahiggin/NRLP .
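The basin-connectivity idea behind the graph tool can be illustrated with a toy directed graph; this is a sketch only, not the NRLP tool itself, and the node and link names below are invented.

```python
# Toy sketch: river reaches as a directed graph; adding a canal ("link")
# changes which river mouths are reachable from an upstream basin.
# Node names and the canal are hypothetical, not taken from the databases.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([("GangaUpper", "GangaMouth"), ("KrishnaUpper", "KrishnaMouth")])
print(nx.has_path(g, "GangaUpper", "KrishnaMouth"))     # False: basins separate

g.add_edge("GangaUpper", "KrishnaUpper", kind="canal")  # hypothetical inter-basin link
print(nx.has_path(g, "GangaUpper", "KrishnaMouth"))     # True: water can be routed south
```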
M
The MERIT DEM was developed by removing multiple error components (absolute bias, stripe noise, speckle noise, and tree height bias) from existing spaceborne DEMs (SRTM3 and AW3D). It represents terrain elevations at a 3 arc-second resolution (~90 m at the equator) and covers land areas between 90N and 60S. The data are freely available for research and education purposes.  +
N
The Multi-Resolution Land Characteristics Consortium (MRLC) has completed the National Land Cover Database (NLCD) 2001 products for the conterminous United States, Hawaii, Alaska and Puerto Rico at 30 m cell resolution. The NLCD 2001 products (land cover, impervious surface and canopy density) were generated from a standardized set of data layers mosaicked by mapping zone. Typical zonal layers included multi-season Landsat 5 and Landsat 7 imagery centered on a nominal collection year of 2001, and Digital Elevation Model based derivatives at 30 meters spatial resolution. NLCD 2001 used an improved classification algorithm from NLCD 1992, resulting in a more precise rendering of spatial boundaries between 16 classes of land cover (additional classes are available in coastal areas and Alaska only).  +
The National Elevation Dataset (NED) is the primary elevation data product of the USGS. The NED is a seamless dataset with the best available raster elevation data of the conterminous United States, Alaska, Hawaii, and territorial islands. The NED is updated on a nominal two month cycle to integrate newly available, improved elevation source data. All NED data are public domain. The NED is derived from diverse source data that are processed to a common coordinate system and unit of vertical measure. NED data are distributed in geographic coordinates in units of decimal degrees, and in conformance with the North American Datum of 1983 (NAD 83). All elevation values are in meters and, over the conterminous United States, are referenced to the North American Vertical Datum of 1988 (NAVD 88). The vertical reference will vary in other areas. NED data are available nationally (except for Alaska) at resolutions of 1 arc-second (about 30 meters) and 1/3 arc-second (about 10 meters), and in limited areas at 1/9 arc-second (about 3 meters). In most of Alaska, only lower resolution source data are available. As a result, most NED data for Alaska are at 2-arc-second (about 60 meters) grid spacing. Part of Alaska is available at the 1- and 1/3-arc-second resolution, and plans are in development for a significant improvement in elevation data coverage of the state.  +
The National Hydrography Dataset (NHD) is the surface-water component of The National Map. The NHD is a comprehensive set of digital spatial data that represents the surface water of the United States using common features such as lakes, ponds, streams, rivers, canals, streamgages, and dams. Polygons are used to represent area features such as lakes, ponds, and rivers; lines are used to represent linear features such as streams and smaller rivers; and points are used to represent point features such as streamgages and dams. Lines also are used to show the water flow through area features such as the flow of water through a lake. The combination of lines is used to create a network of water and transported material flow to allow users of the data to trace movement in downstream and upstream directions.  +
The National Ocean Service (NOS) Hydrographic Data Base (NOSHDB), maintained by NGDC in conjunction with NOS, provides extensive survey coverage of the coastal waters and Exclusive Economic Zone (EEZ) of the United States and its territories. The NOSHDB contains data digitized from smooth sheets of hydrographic surveys completed between 1851 and 1965, and from survey data acquired digitally on NOS survey vessels since 1965. Over 76 million soundings from over 6600 surveys are now included in the NOSHDB. These data may be searched and downloaded online using the Hydrographic Survey Data Map Service (an interactive map and data discovery tool at NGDC; http://map.ngdc.noaa.gov/website/mgg/nos_hydro/viewer.htm). The NOSHDB data with search and retrieval software are also available on a DVD-ROM or CD-ROM set. Data products from NOS surveys, including BAG files, descriptive reports (DRs), smooth sheet images, survey data images, textual gridded data, and sidescan sonar mosaics, are available for download from NGDC using the Hydrographic Survey Data Map Service.  +
S
The Natural Resources Conservation Service (NRCS) - National Cartography and Geospatial Center (NCGC) previously archived and distributed the State Soil Geographic (STATSGO) Database. The STATSGO spatial and tabular data were revised and updated in 2006. STATSGO has been renamed to the U.S. General Soil Map (STATSGO2). It is available for download from the Soil Data Mart (http://soildatamart.nrcs.usda.gov/). The dataset was created by generalizing more detailed soil survey maps. Where more detailed soil survey maps were not available, data on geology, topography, vegetation, and climate were assembled, together with Land Remote Sensing Satellite (LANDSAT) images. Soils of like areas were studied, and the probable classification and extent of the soils were determined. Map unit composition was determined by transecting or sampling areas on the more detailed maps and expanding the data statistically to characterize the whole map unit. This dataset consists of geo-referenced vector and tabular digital data. The map data were collected in 1- by 2-degree topographic quadrangle units and merged into a seamless national dataset. It is distributed in state/territory and national extents. The soil map units are linked to attributes in the tabular data, which give the proportionate extent of the component soils and their properties. The tabular data contain estimated data on the physical and chemical soil properties, soil interpretations, and static and dynamic metadata. Most tabular data exist in the database as a range of soil properties, depicting the range for the geographic extent of the map unit. In addition to low and high values for most data, a representative value is also included for these soil properties.  +
P
The PSMSL was established in 1933 and is the global data bank for long-term sea level change information from tide gauges. The PSMSL collects data from several hundred gauges situated all over the globe. As of December 2006, the PSMSL database contains over 55,000 station-years of monthly and annual mean values of sea level from almost 2,000 tide gauge stations around the world, received from almost 200 national authorities. On average, approximately 2,000 station-years of data are entered into the database each year.  +
S
The SRTM Water Body Data files are a by-product of the data editing performed by the National Geospatial-Intelligence Agency (NGA) to produce the finished SRTM Digital Terrain Elevation Data Level 2 (DTED® 2). In accordance with the DTED® 2 specification, the terrain elevation data have been edited to portray water bodies that meet minimum capture criteria. Ocean, lake and river shorelines were identified and delineated. Lake elevations were set to a constant value. Ocean elevations were set to zero. Rivers were stepped down monotonically to maintain proper flow. After this processing was done, the shorelines from the one arc second (approx. 30-meter) DTED® 2 were saved as vectors in ESRI 3-D Shapefile format. In most cases, two orthorectified image mosaics (one for ascending passes and one for descending passes) at a one arc second resolution were available for identifying water bodies and delineating shorelines in each 1 x 1 degree cell. These were used as the primary source for water body editing. The guiding principle for this editing was that water must be depicted as it was in February 2000 at the time of the shuttle flight. A Landcover water layer and medium-scale maps and charts were used as supplemental data sources, generally as supporting evidence for water identified in the image mosaics. Since the Landcover water layer was derived mostly from Landsat 5 data collected a decade earlier than the Shuttle mission and the map sources had similar currency problems, there were significant seasonal and temporal differences between the depiction of water in the ancillary sources and the actual extent of water bodies in February 2000 in many instances. In rare cases, where the SRTM image mosaics were missing or unusable, Landcover was used to delineate the water in the SRTM cells. The DTED® header records for those cells are documented accordingly.  +
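The monotonic stepping of river elevations mentioned above can be pictured with a small sketch that clamps each downstream sample to be no higher than the one upstream; this is an illustration only, not NGA's actual editing procedure.

```python
# Illustrative only: enforce monotonically non-increasing elevations along a
# downstream-ordered river profile, in the spirit of the DTED(R) 2 editing.
def step_down(elevations):
    out, current = [], float("inf")
    for z in elevations:
        current = min(current, z)
        out.append(current)
    return out

print(step_down([105.0, 106.2, 104.8, 104.9, 103.1]))
# -> [105.0, 105.0, 104.8, 104.8, 103.1]
```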
The SSURGO database contains information about soil as collected by the National Cooperative Soil Survey over the course of a century. The information can be displayed in tables or as maps and is available for most areas in the United States and the Territories, Commonwealths, and Island Nations served by the USDA-NRCS. The information was gathered by walking over the land and observing the soil. Many soil samples were analyzed in laboratories. The maps outline areas called map units. The map units describe soils and other components that have unique properties, interpretations, and productivity. The information was collected at scales ranging from 1:12,000 to 1:63,360. More details were gathered at a scale of 1:12,000 than at a scale of 1:63,360. The mapping is intended for natural resource planning and management by landowners, townships, and counties. Some knowledge of soils data and map scale is necessary to avoid misunderstandings.  +
The Shuttle Radar Topography Mission (SRTM) obtained elevation data on a near-global scale to generate the most complete high-resolution digital topographic database of Earth between 56 degrees south and 60 degrees north latitude. SRTM consisted of a specially modified radar system that flew onboard the Space Shuttle Endeavour during an 11-day mission in February of 2000. NASA has released version 2 of the Shuttle Radar Topography Mission digital topographic data (also known as the "finished" version). Version 2 is the result of a substantial editing effort by the National Geospatial Intelligence Agency and exhibits well-defined water bodies and coastlines and the absence of spikes and wells (single pixel errors), although some areas of missing data ('voids') are still present. The Version 2 directory also contains the vector coastline mask derived by NGA during the editing, called the SRTM Water Body Data (SWBD), in ESRI Shapefile format. Version 2.1 is a recalculation of the SRTM3 (nominal 90 meter sample spacing) version made by 3x3 averaging of the full resolution edited data. Version 2 had been generated by masking in edited samples from the lower-resolution data publicly released by NGA, and contained occasional artifacts, in particular a slight vertical “banding” in data beyond 50° latitude. These have been eliminated in Version 2.1. SRTM data are distributed in two levels: SRTM1 (for the U.S. and its territories and possessions) with data sampled at one arc-second intervals in latitude and longitude, and SRTM3 (for the world) sampled at three arc-seconds. Three arc-second data are generated by three by three averaging of the one arc-second samples.  +
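The 3x3 averaging used to derive SRTM3 from the one arc-second samples amounts to simple block averaging; the sketch below shows the idea with numpy, leaving out void handling and the tile-edge conventions of the actual product.

```python
# Minimal sketch of 3x3 block averaging (no void handling or edge rules).
import numpy as np

def block_average_3x3(dem_1arcsec):
    rows, cols = dem_1arcsec.shape
    rows -= rows % 3
    cols -= cols % 3
    trimmed = dem_1arcsec[:rows, :cols].astype(float)
    return trimmed.reshape(rows // 3, 3, cols // 3, 3).mean(axis=(1, 3))

dem = np.arange(36, dtype=float).reshape(6, 6)
print(block_average_3x3(dem))   # 2x2 array of 3x3 cell means
```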
The Southern Alaska Coastal Relief Model is a 24 arc-second digital elevation model ranging from 170° to 230° E and 48.5° to 66.5° N. It integrates bathymetry and topography to represent Earth's surface and spans the Gulf of Alaska, the Bering Sea, the Aleutian Islands, and Alaska's largest communities: Anchorage, Fairbanks, and Juneau. The relief model was built from a variety of source datasets acquired from the National Geophysical Data Center, National Ocean Service, United States Geological Survey, National Aeronautics and Space Administration, and other U.S. and international agencies. The CRM provides a framework to enable scientists to model tsunami propagation and ocean circulation. In addition, it may be useful for benthic habitat research, weather forecasting, and environmental stewardship.  +
R
The U.S. Geological Survey Real-Time Permafrost and Climate Monitoring Network in Arctic Alaska is a collaborative effort with BLM, U.S. Fish and Wildlife Service, private organizations and universities, all managed by USGS. The network was established to provide high quality real-time environmental data to aid in land management decision making. This real-time network is a subset of a larger U.S. Geological Survey permafrost and climate monitoring research network. Many of the stations are co-located with deep boreholes, thus forming the basis for comprehensive permafrost monitoring observatories. The objectives of the larger network include climate change detection, monitoring how permafrost and vegetation respond to climate change, and acquiring improved data for current permafrost characterization and impact assessment models.  +
N
The United States Geological Survey (USGS) has collected water-resources data at approximately 1.5 million sites across the United States, Puerto Rico, and Guam. The types of data collected are varied, but generally fit into the broad categories of surface water and ground water. Surface-water data, such as gage height (stage) and streamflow (discharge), are collected at major rivers, lakes, and reservoirs. Ground-water data, such as water level, are collected at wells and springs. Water-quality data are available for both surface water and ground water. Examples of water-quality data collected are temperature, specific conductance, pH, nutrients, pesticides, and volatile organic compounds. This web site serves current and historical data. Data are retrieved by category of data, such as surface water, ground water, or water quality, and by geographic area. Subsequent pages allow further refinement by selecting specific information and by defining the output desired. Real-time data typically are recorded at 15-60 minute intervals, stored onsite, and then transmitted to USGS offices every 1 to 4 hours, depending on the data relay technique used. Recording and transmission times may be more frequent during critical events. Data from real-time sites are relayed to USGS offices via satellite, telephone, and/or radio and are available for viewing within minutes of arrival. (Note that all real-time data are provisional and subject to revision).  +
W
The World Glacier Inventory contains information for over 100,000 glaciers throughout the world. Parameters within the inventory include geographic location, area, length, orientation, elevation, and classification of morphological type and moraines. The inventory entries are based upon a single observation in time and can be viewed as a 'snapshot' of the glacier at that time. The core of this collection is data from the World Glacier Monitoring Service, Zurich. The development of the data product was funded through NOAA's Environmental Services Data and Information Management (ESDIM) program.  +
The World Ocean Atlas 2001 (WOA01) contains ASCII data of statistics and objectively analyzed fields for one-degree and five-degree squares generated from World Ocean Database 2001 observed and standard level flagged data. The ocean variables included in the atlas are: in-situ temperature, salinity, dissolved oxygen, apparent oxygen utilization, percent oxygen saturation, dissolved inorganic nutrients (phosphate, nitrate, and silicate), chlorophyll at standard depth levels, and plankton biomass sampled from 0 - 200 meters.  +
The World Vector Shoreline (WVS) is a digital data file at a nominal scale of 1:250000, containing the shorelines, international boundaries and country names of the world. The World Vector Shoreline is a standard US Defense Mapping Agency (DMA) product that has been designed for use in many applications. The WVS is divided into ten ocean basin area files. Together the ten files form a seamless world, with the exception of Central America, where there is an overlap between the Western North Atlantic file and the Eastern North Pacific File. The main source material for the WVS was the DMA's Digital Landmass Blanking (DLMB) data which was derived primarily from the Joint Operations Graphics and coastal nautical charts produced by DMA. The DLMB data consists of a land/water flag file on a 3 by 3 arc-second interval grid. This raster data set was converted into vector form to create the WVS. For areas of the world not covered by the DLMB data (e.g. the Arctic and Antarctic), the shoreline was taken from the best available hard copy sources at a preferred scale of 1:250000. The WVS data are stored in chain-node format, and include tags to indicate the landside/waterside of the shoreline.  +
G
The basic concept adopted to develop this database is to integrate the best land cover data available, from local to global, into one single database using international standards; this task requires harmonization among the different layers and legends to create a consistent product. The criteria and steps for the harmonization are to:
* absorb, overcome, and minimize the thematic and spatio-temporal differences between individual databases;
* create an efficient and practical mechanism to harmonize various datasets using the land cover elements;
* use data fusion techniques to overcome some of the harmonization issues, and identify agreement/disagreement between a limited number of global datasets at the pixel level;
* create the land cover database;
* validate the land cover database;
* develop a fully automated “procedure” to update the database when new datasets become available.  +
The fully-standardised 26 geomorphometric variables consist of layers that describe the (i) rate of change across the elevation gradient, using first and second derivatives, (ii) ruggedness, and (iii) geomorphological forms. The Geomorpho90m variables are available at 3 arc-second (~90 m) and 7.5 arc-second (~250 m) resolutions under the WGS84 geodetic datum, and at 100 m spatial resolution under the Equi7 projection. They are useful for modelling applications in fields such as geomorphology, geology, hydrology, ecology and biogeography.  +
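As a simple illustration of the first-derivative layers, slope can be computed from a small elevation array with numpy; this is a sketch under an assumed 90 m cell size, not the Geomorpho90m processing chain itself.

```python
# Sketch: slope (first derivative of elevation) from a toy DEM array.
import numpy as np

dem = np.array([[100.0, 101.0, 103.0],
                [ 99.0, 100.0, 102.0],
                [ 98.0,  99.0, 101.0]])
cellsize = 90.0                                # ~90 m at 3 arc-seconds (assumed)
dz_dy, dz_dx = np.gradient(dem, cellsize)      # first derivatives
slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
print(slope_deg)
```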
M
The global Mixed Layer Depth (MLD) Climatologies available here are computed from more than 5 million individual profiles obtained from the National Oceanographic Data Center (NODC), from the World Ocean Circulation Experiment (WOCE) database, and from the ARGO program. These comprise all the high vertical resolution data available from 1941 to 2008, including mechanical bathythermograph (MBT), expendable bathythermograph (XBT), conductivity-temperature-depth (CTD) probes, and profiling floats (PFL). The MLDs are estimated directly on individual profiles with data at observed levels. The MLD is defined through the threshold method, with a finite difference criterion from a near-surface reference value. A linear interpolation between levels is then used to estimate the exact depth at which the difference criterion is reached. The reference depth is set at 10 m to avoid a large part of the strong diurnal cycle in the top few meters of the ocean. The optimal temperature criterion is found to be a 0.2 °C absolute difference from the surface value; the optimal density criterion is a 0.03 kg/m3 difference. Reduction of the data is done on a regular 2° by 2° grid for every month, by taking the median of all MLDs in each grid mesh. A slight smoothing is then applied to account for the noisy nature of ship observations. The last step is an optimal prediction of the missing data using an ordinary kriging method. This interpolation was limited to a 1000 km radius disk containing at least 5 grid point values, leaving regions without values rather than filling them with a doubtful interpolation. The advantage of kriging is that it is an exact interpolator, and an estimation error in the form of the kriging standard deviation, an analogue of the statistical standard deviation, is also provided.  +
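A sketch of the threshold criterion described above, applied to a single profile: the mixed layer depth is the depth at which temperature first differs from the 10 m reference value by 0.2 °C, found by linear interpolation between observed levels. This illustrates the stated definition; it is not the climatology's production code.

```python
# Threshold-method MLD on one profile (0.2 degC criterion, 10 m reference).
import numpy as np

def mld_threshold(depth, temp, ref_depth=10.0, dT=0.2):
    t_ref = np.interp(ref_depth, depth, temp)        # reference temperature at 10 m
    for i in range(len(depth) - 1):
        if depth[i + 1] < ref_depth:                 # stay below the reference depth
            continue
        d0, d1 = abs(temp[i] - t_ref), abs(temp[i + 1] - t_ref)
        if d1 >= dT:                                 # criterion reached in this layer
            frac = (dT - d0) / (d1 - d0)
            return depth[i] + frac * (depth[i + 1] - depth[i])
    return None                                      # criterion never reached

depth = np.array([0.0, 5.0, 10.0, 20.0, 30.0, 50.0, 75.0])
temp  = np.array([20.1, 20.1, 20.0, 19.95, 19.9, 19.4, 18.0])
print(mld_threshold(depth, temp))                    # 34.0 m for this toy profile
```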
D
The global drainage direction map DDM30 is a raster map which describes the drainage directions of surface water with a spatial resolution of 30’ longitude by 30’ latitude. 66,896 individual grid cells, covering the entire land surface of the globe (without Antarctica), are connected to each other by their respective drainage directions and are thus organized into drainage basins. Each cell can drain only into one of the eight neighboring cells. DDM30 is based on (1) the digital drainage direction map of Graham et al. (1999), with a resolution of 5’, for South America, Australia, Asia, and Greenland, and (2) the HYDRO1k digital drainage direction map (as a flow accumulation map), with a resolution of 1 km (USGS, 1999), for North America, Europe, Africa, and Oceania (without Australia). Both base maps were upscaled to a resolution of 30’.  +
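The "each cell drains into one neighbor" structure can be illustrated by tracing a flow path on a toy grid; the compass-letter encoding below is for illustration only, as DDM30 uses its own direction codes.

```python
# Toy D8-style flow tracing: each cell points to one of its eight neighbours.
OFFSETS = {"N": (-1, 0), "NE": (-1, 1), "E": (0, 1), "SE": (1, 1),
           "S": (1, 0), "SW": (1, -1), "W": (0, -1), "NW": (-1, -1)}

ddm = [["E", "E",  "S"],
       ["E", "SE", "S"],
       ["E", "E",  "OUT"]]        # "OUT" marks the basin outlet

def trace(row, col):
    path = [(row, col)]
    while ddm[row][col] != "OUT":
        dr, dc = OFFSETS[ddm[row][col]]
        row, col = row + dr, col + dc
        path.append((row, col))
    return path

print(trace(0, 0))                # route from the upper-left cell to the outlet
```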
O
The mission of the OpenTopography Facility is to:
- Democratize online access to high-resolution (meter to sub-meter scale), Earth science-oriented, topography data acquired with LiDAR and other technologies.
- Harness cutting edge cyberinfrastructure to provide Web service-based data access, processing, and analysis capabilities that are scalable, extensible, and innovative.
- Promote discovery of data and software tools through community populated metadata catalogs.
- Partner with public domain data holders to leverage OpenTopography infrastructure for data discovery, hosting and processing.
- Provide professional training and expert guidance in data management, processing, and analysis.
- Foster interaction and knowledge exchange in the Earth science LiDAR user community.
The OpenTopography Facility is based at the San Diego Supercomputer Center at the University of California, San Diego and is operated in collaboration with colleagues at UNAVCO and in the School of Earth and Space Exploration at Arizona State University. Core operational support for OpenTopography comes from the National Science Foundation Earth Sciences: Instrumentation and Facilities Program (EAR/IF) and the Office of Cyberinfrastructure. In addition, we receive funding from the NSF and NASA to support various OpenTopography related research and development activities. OpenTopography was initially developed as a proof of concept cyberinfrastructure in the Earth sciences project as part of the NSF Information and Technology Research (ITR) program-funded Geoscience Network (GEON) project.  +
3
The overall objective of the project is to combine radar and lidar remote sensing to characterize forested landscapes in 3D. The science products generated by Simard and collaborators have four main components:
1. Global scale mapping of canopy height and biomass at 1 km spatial resolution.
2. Improving the Shuttle Radar Topography Mission (SRTM) elevation dataset using ICESat's Geoscience Laser Altimeter System (GLAS).
3. High spatial resolution mapping of canopy height and biomass using polarimetric synthetic aperture radar interferometry (PolInSAR) and LiDAR.
4. Mapping of mangrove forest canopy height, biomass, and productivity, and assessment of vulnerability to anthropogenic activity and sea level change.  +
R
This comprehensive river discharge database covers the entire pan-Arctic drainage system. The collection comprises data from 9138 gauges and contains monthly river discharge data extending from the 1890s (for four Canadian and five Russian gauges) through the early 1990s, but the majority of data was collected between 1960 and 2001. The pan-Arctic drainage region covers a land area of approximately 21 million km2 and drains into the Arctic Ocean as well as Hudson Bay, James Bay, and the Northern Bering Strait. The collection also includes the Yukon and Anadyr River basins. Most of the drainage basins in the database are greater than 15,000 km2; however, the collection includes all available gauge data from Canada and Russia. Data from gauges measuring large drainage areas are of greatest interest to the regional, continental, and global-scale scientific community for modeling purposes. Individual station data are accessible through a graphical interface, or as tab-delimited ASCII text. Tab-delimited ASCII data are also compiled by hydrological region and as a single file for the complete data set.  +
N
This data portal, operated by the National Oceanic and Atmospheric Administration (NOAA) and the National Climatic Data Center (NCDC), contains archives of weather radar data (Doppler radar: NEXRAD), satellite coverage, and additional surface and marine data. These additional surface and marine data comprise historical forecasts and analyses, as well as the International Comprehensive Ocean-Atmosphere Data Set (ICOADS) that covers three centuries of global ocean-atmosphere data, including 2x2 and (since 1960) 1x1 degree gridded data sets. A tutorial for retrieving radar data is here: http://www.ncdc.noaa.gov/oa/radar/nxhastutorial.html  +
S
This data set is generated from brightness temperature data derived from Nimbus-7 Scanning Multichannel Microwave Radiometer (SMMR) and Defense Meteorological Satellite Program (DMSP) -F8, -F11 and -F13 Special Sensor Microwave/Imager (SSM/I) radiances at a grid cell size of 25 x 25 km. The data are provided in the polar stereographic projection. This product is designed to provide a consistent time series of sea ice concentrations (the fraction, or percentage, of ocean area covered by sea ice) spanning the coverage of several passive microwave instruments. To aid in this goal, sea ice algorithm coefficients are changed to reduce differences in sea ice extent and area as estimated using the SMMR and SSM/I sensors. The data are generated using the NASA Team algorithm developed by the Oceans and Ice Branch, Laboratory for Hydrospheric Processes at NASA Goddard Space Flight Center (GSFC). These data include gridded daily (every other day for SMMR data) and monthly averaged sea ice concentrations for both the north and south polar regions. Two types of data are provided: final data and preliminary data. Final data are produced at GSFC about once per year, with roughly a one-year latency, and include data since 26 October 1978. Final data are produced from SMMR brightness temperature data processed at NASA GSFC and SSM/I brightness temperature data processed at the National Snow and Ice Data Center (NSIDC). Preliminary data are produced at NSIDC approximately every three months (quarterly), using SSM/I data acquired from Remote Sensing Systems, Inc. (RSS), and include roughly the most recent three to twelve months of processed data. Data are scaled and stored as one-byte integers in flat binary arrays.  +
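Reading the flat one-byte arrays is straightforward with numpy, but note that the header length, grid shape, and scaling in the sketch below (300-byte header, 448 x 304 northern grid, 0-250 mapped to 0-100 %) are assumptions about the file layout rather than facts stated above; check the product documentation before relying on them.

```python
# Heavily hedged sketch: read a flat one-byte sea ice concentration array.
# Header length, grid shape, scaling, and the file name are all assumptions.
import numpy as np

with open("nt_19900101_f08_n.bin", "rb") as f:
    raw = np.frombuffer(f.read(), dtype=np.uint8)

grid = raw[300:].reshape(448, 304)                         # assumed header and shape
concentration = np.where(grid <= 250, grid / 2.5, np.nan)  # percent; values >250 are flags
print(np.nanmax(concentration))
```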
C
This data set is related to the degradational sand-gravel laboratory experiment described in http://dx.doi.org/10.1002/2016WR018938. It was created by Clara Orru, Delft University of Technology.  +
G
This data set provides acoustic measurements of the bottom boundary layer in a tidal current over a flat muddy-sand bed. Observations were made with a Nortek Vectrino Profiler over the lowest 2.5 cm of the water column with 1 mm resolution. Observations were made in June of 2011 just after slack tide during a flood tidal flow. Please contact Diane Foster of the University of New Hampshire with any questions (diane.foster@unh.edu). References: Wengrove, M. E., & Foster, D. L. (2014). Field evidence of the viscous sublayer in a tidally forced developing boundary layer. Geophysical Research Letters, 41(14), 5084-5090. Wengrove, M. E., Foster, D. L., Kalnejais, L. H., Percuoco, V., & Lippmann, T. C. (2015). Field and laboratory observations of bed stress and associated nutrient release in a tidal estuary. Estuarine, Coastal and Shelf Science, 161, 11-24.  +
P
This dataset consists of borehole temperature measurements acquired in permafrost regions of arctic Alaska between 1950 and 1988 by the U.S. Geological Survey. A large number of the 87 sites (boreholes) represented in this dataset are deep enough to penetrate the base of permafrost.  +
N
This dataset includes observations of water level, water temperature, and wave field collected in 2009 and 2010 near Drew Point, AK by investigators Anderson, Overeem, and Wobus, with help from Adam LeWinter. These data were used in a 2014 publication by Barnhart et al. Please see the file README.txt that accompanies the data files. Barnhart, K. R., R. S. Anderson, I. Overeem, C. Wobus, G. D. Clow, and F. E. Urban (2014), Modeling erosion of ice-rich permafrost bluffs along the Alaskan Beaufort Sea coast, Journal of Geophysical Research Earth Surface, 119, doi:10.1002/2013JF002845. The following files are included in this dataset:
1. README.txt
2. TidbitWaterTemperature_2009.txt
3. WaveLogger2_2009.txt
4. WaveLoggerTemperature_2009
5. Levelogger_2010.txt
6. WaveLogger_2010.txt  +
L
This dataset links human land use and land cover (LULC) types from the Land-Use Harmonization (LUH2) dataset (Lawrence et al., 2016) to four hydrologic soil groups from 850 to 2015 derived from the SoilGrids250m soils dataset (Hengl et al., 2017). These groups represent sandy soils (hydrologic group A), consisting of the texture classes sand, sandy-loam, and loamy sand; silty soils (hydrologic group B), consisting of loam, silty-loam, and silt; mixed sand-silt-clay soils (hydrologic group C); and clayey soils (hydrologic group D), represented by the clay, sandy-clay, clay-loam, silty-clay, and silt-clay-loam texture classes from the SoilGrids250m dataset. This dataset makes it possible to better link LULC types to the soil types typically used for these activities, potentially improving the simulation of water, energy, and biogeochemical processes in Earth System Models. Additionally, it lays the foundation for simulating LULC impacts on soils that have different vulnerabilities and responses to human uses of soils.  +
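The texture-class groupings stated above can be written down as a simple lookup table; this is just a restatement of the description for convenience, not part of the dataset itself.

```python
# Hydrologic soil groups as described above (group C covers the remaining
# mixed sand-silt-clay classes).
HYDROLOGIC_GROUP = {
    "sand": "A", "sandy-loam": "A", "loamy sand": "A",     # sandy soils
    "loam": "B", "silty-loam": "B", "silt": "B",           # silty soils
    "clay": "D", "sandy-clay": "D", "clay-loam": "D",      # clayey soils
    "silty-clay": "D", "silt-clay-loam": "D",
}

print(HYDROLOGIC_GROUP.get("silty-loam", "C"))             # -> "B"
```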
O
This project is developing a processing system and data center to provide operational ocean surface velocity fields from satellite altimeter and vector wind data. The regional focus will be the tropical Pacific, where we will demonstrate the value for a variety of users, specifically fisheries management and recruitment, monitoring debris drift, larvae drift, oil spills, fronts and eddies, as well as on-going large scale ENSO monitoring, diagnostics and prediction. We will encourage additional uses in search and rescue, naval and maritime operations. The data will be subjected to extensive validation and error analysis, and applied to various ocean, climate and dynamic basic research problems. The user base derives from the NOAA CoastWatch and climate prediction programs, the broad research community, the Navy's operational ocean analysis program, and other civilian uses. The end product is to leave in place a turnkey system running at NOAA/NESDIS, with an established user clientele and easy internet data access. The method to derive surface currents with satellite altimeter and scatterometer data is the outcome of several years of NASA-sponsored research. The proposed project will transition that capability to operational oceanographic applications. The end product will be velocity maps updated daily, with a goal for eventual 2-day maximum delay from time of satellite measurement. Grid resolution will be 100 km for the basin scale, and finer resolution in the vicinity of the Pacific Islands. The team consists of private non-profit, educational and government partners with broad experience and familiarity with the data, and the scientific and technical issues. Two partners are the original developers of the surface current derivation techniques, and two are closely tied to satellite data sources and primary processing centers. Others represent NOAA/NESDIS, Climate Prediction Center, CoastWatch, NMFS and the Navy to evaluate uses and applications.
W
WAVEWATCH III™ (Tolman 1997, 1999a, 2009) is a third generation wave model developed at NOAA/NCEP in the spirit of the WAM model (WAMDIG 1988, Komen et al. 1994). It is a further development of the model WAVEWATCH, as developed at Delft University of Technology (Tolman 1989, 1991a), and WAVEWATCH II, developed at NASA, Goddard Space Flight Center (e.g., Tolman 1992). WAVEWATCH III™, however, differs from its predecessors in many important points such as the governing equations, the model structure, the numerical methods and the physical parameterizations. Furthermore, with model version 3.14, WAVEWATCH III™ is evolving from a wave model into a wave modeling framework, which allows for easy development of additional physical and numerical approaches to wave modeling. WAVEWATCH III™ solves the random phase spectral action density balance equation for wavenumber-direction spectra. The implicit assumption of this equation is that properties of the medium (water depth and current), as well as the wave field itself, vary on time and space scales that are much larger than the variation scales of a single wave. With version 3.14 some source term options for extremely shallow water (surf zone) have been included, as well as wetting and drying of grid points. Whereas the surf-zone physics implemented so far are still fairly rudimentary, this does imply that the wave model can now be applied to arbitrarily shallow water.  +
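For reference, the action density balance equation referred to above is commonly written in the following form (a standard rendering, not a quotation of the model manual):

$$
\frac{\partial N}{\partial t}
+ \nabla_{x} \cdot \left(\dot{\mathbf{x}}\,N\right)
+ \frac{\partial}{\partial k}\left(\dot{k}\,N\right)
+ \frac{\partial}{\partial \theta}\left(\dot{\theta}\,N\right)
= \frac{S}{\sigma},
\qquad
N(k,\theta;\mathbf{x},t) = \frac{F(k,\theta)}{\sigma},
$$

where $F$ is the wavenumber-direction spectrum, $\sigma$ the intrinsic frequency, and $S$ the net source term.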
A
We present data for use with machine learning (ML), dealing with the sediment/rock substrates of the NE USA Continental Margin. The pointwise labelled data from seabed observations should be spatially extended over the entire area in an intelligent way. To aid that, environmental feature layers can be employed to train any chosen machine learning method. The trained ML model is then extended across all the vacant areas. The result predicts what the seabed is made of, so that survey operations (including research) can be planned, or biogeochemical budgets can be calculated. The idea of the Challenge Dataset is to permit people - researchers and students - to experiment with their own machine learning algorithms and data-preparation adjustments to achieve the best possible mapping over the area. Metrics on the uncertainties should also be computed. For the time being, the mappings are in terms of mud/sand/gravel, rock exposure, and carbonate and organic carbon contents. See the PowerPoint file in the zip file for further instructions.  +
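A hedged sketch of the workflow described above: fit a classifier on the labelled seabed points using co-located environmental feature layers, then predict the substrate class over the unlabelled grid. The file names, feature columns, and the choice of a random forest are placeholders, not requirements of the challenge.

```python
# Sketch only: train on labelled points, predict substrate over the grid.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

obs = pd.read_csv("labelled_points.csv")          # hypothetical file names
grid = pd.read_csv("feature_layers_grid.csv")

features = ["depth", "slope", "grain_proxy"]      # assumed feature columns
model = RandomForestClassifier(n_estimators=300, random_state=0)

print(cross_val_score(model, obs[features], obs["substrate"], cv=5).mean())
model.fit(obs[features], obs["substrate"])
grid["predicted_substrate"] = model.predict(grid[features])
```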
W
World Ocean Atlas 2009 (WOA09) is a set of objectively analyzed (1° grid) climatological fields of in situ temperature, salinity, dissolved oxygen, Apparent Oxygen Utilization (AOU), percent oxygen saturation, phosphate, silicate, and nitrate at standard depth levels for annual, seasonal, and monthly compositing periods for the World Ocean. It also includes associated statistical fields of observed oceanographic profile data interpolated to standard depth levels on both 1° and 5° grids.  +
U
World Population Prospects: The 2008 Revision Population Database from the United Nations Population Division. The preparation of each new revision of the official population estimates and projections of the United Nations involves two distinct processes: (a) the incorporation of all new and relevant information regarding the past demographic dynamics of the population of each country or area of the world; and (b) the formulation of detailed assumptions about the future paths of fertility, mortality and international migration. The data sources used and the methods applied in revising past estimates of demographic indicators (i.e., those referring to 1950-2010) are presented online and in volume III of World Population Prospects: The 2008 Revision (forthcoming). The future population of each country is projected starting with an estimated population for 1 July 2010. Because population data are not necessarily available for that date, the 2010 estimate is derived from the most recent population data available for each country, obtained usually from a population census or a population register, projected to 2010 using all available data on fertility, mortality and international migration trends between the reference date of the population data available and 1 July 2010. In cases where data on the components of population change relative to the past 5 or 10 years are not available, estimated demographic trends are projections based on the most recent available data. Population data from all sources are evaluated for completeness, accuracy and consistency, and adjusted as necessary. To project the population until 2050, the United Nations Population Division uses assumptions regarding future trends in fertility, mortality and international migration. Because future trends cannot be known with certainty, a number of projection variants are produced. The following paragraphs summarize the main assumptions underlying the derivation of demographic indicators for the period starting in 2010 and ending in 2050. A more detailed description of the different assumptions will be available in volume III of World Population Prospects: The 2008 Revision (forthcoming). The 2008 Revision includes eight projection variants. The eight variants are: low; medium; high; constant-fertility; instant-replacement-fertility; constant-mortality; no change (constant-fertility and constant-mortality); and zero-migration. The World Population Prospects Highlights focuses on the medium variant of the 2008 Revision, and results from the first four variants are available on-line.