AUTOMATED STAND DELINEATION AND FIRE FUELS MAPPING

Jennifer Stefanacci, Director of Geospatial Services
Parallel, Incorporated
USGS Rocky Mountain Geographic Science Center
Denver, CO 80225
jlstefanacci@usgs.gov

ABSTRACT

Wildfires continue to put pressure on planning and mitigation efforts, making the ability to map fire fuels and risks increasingly important. This project attempts to map areas on state, private, and other federal lands for this purpose and has focused on the development of advanced digital mapping methodologies to support fire fuels mapping. The fusion of advanced image classification techniques with high-resolution satellite data has proven to provide cost-effective and accurate inventories of fire fuels and associated risks in the Wildland-Urban Interface (WUI). Remotely sensed data is relatively inexpensive and can be acquired over large areas within the same growing season, making it ideal for this purpose. The project examines whether automated stand delineation and fuels mapping can be performed consistently from a readily available, reasonably priced data source.

BACKGROUND

As wildfires continue to put pressure on planning and mitigation efforts at federal, state, and local levels, the ability to map fire fuels and associated risks at the local level becomes increasingly important. In many areas, the growing population in the Wildland-Urban Interface (WUI) adds to this complexity. The Stand Delineation and Fire Fuels Mapping pilot project began as an attempt to map areas similar to the US Forest Service Common Vegetation Units (CVU) on state, private, and other federal lands. The pilot area, near Evergreen, Colorado, has focused on the development of advanced digital mapping methodologies to support fire fuels mapping. The fusion of advanced image classification techniques with high-resolution satellite data has proven to provide cost-effective and accurate inventories of fire fuels and associated risks in the WUI.
Using automated methods, areas similar to the CVUs are made available over large areas controlled by a variety of land owners, creating more consistent fuels information for land managers and fire response teams. The fire fuels classification will be used by the USGS, in collaboration with impacted communities, local fire departments, and state and federal officials, to conduct a natural hazards risk assessment of the project area. After the hazards risk assessment is complete, the USGS can apply an integrated science approach to examine all potential natural hazards expected to impact the area, and consider the spatial and temporal aspects of a hazard and the potential for interaction among hazards. Specific to the wildland fire hazard, fuel loadings, impacts and effectiveness of fuel treatments, pre-event long-term climatic conditions and rainfall regime, and potential resources at risk can be evaluated. This integrated assessment of hazards allows the USGS to work with fire managers to prioritize fuel treatments, develop criteria for incident response, and determine effective post-fire rehabilitation treatments.

OBJECTIVES

Given the difficulties of managing widely dispersed land or land owned by multiple entities, the USGS is looking for alternative methods of mapping vegetation stands and fire fuels that are cost-effective, cover large geographic areas, and have a short turn-around time. These characteristics will allow the mapping to be repeated on a regular basis in order to monitor the effects of management practices and fuel treatments. With the needs of the state and local land managers in mind, remote sensing seems to meet these requirements. Remotely sensed data is relatively inexpensive compared to field work and can be acquired over large areas within the same growing season. The acquisition can also be targeted to specific times of the year in order to isolate particular vegetation characteristics.
Once the classification techniques have been developed, they can be applied to a large area at once, so the turnaround time is much shorter than for field surveys. The goal for this project was to determine whether automated stand delineation and fuels mapping can be performed consistently from a readily available, reasonably priced data source.

STUDY AREA

In recent years the Wildland-Urban Interface (WUI) along the Front Range of Colorado has seen a large increase in population. While the populations of Denver, Fort Collins, Colorado Springs, and other cities have expanded greatly, many people are moving to small towns in the foothills where they can commute to the city for work. The WUI area is characterized by a variety of land owners, including state lands, city and county parks, and private residential areas. The mix of landowners and parcel sizes makes cohesive management of wildfire risks difficult.

Figure 1. Pilot study areas in Colorado and Wyoming.

In order to define the procedures to automate stand delineation and map fire fuels, a small study area was chosen around Evergreen, Colorado. Evergreen is 30 miles west of Denver at an elevation of 7,000 feet. The population of Evergreen and the surrounding communities is about 30,000, with a mix of small towns, low-density residential areas, and Ponderosa Pine forest. The variety of land cover, as well as its easy accessibility for field verification, makes this area ideal for testing the classification procedures. Once the Evergreen quadrangle was classified, the methods were tested in four other locations: Klondike Ranch and Beartrap Meadow, Wyoming, and Divide and Electric Mountain, Colorado.

METHODS

Data Sources

Both high- and medium-resolution satellite data were considered for this project. The high-resolution data acquired was DigitalGlobe QuickBird data with a multi-spectral resolution of 2.7 meters. The medium-resolution data source was LANDSAT ETM+, which was pan-sharpened with both the panchromatic band from LANDSAT and from the QuickBird image.
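The paper does not specify which pan-sharpening algorithm was applied. As one common illustration, the Brovey transform scales each multi-spectral band by the ratio of panchromatic intensity to the mean of the bands; the NumPy sketch below shows the idea, with the function name and the tiny synthetic arrays invented for the example.

```python
import numpy as np

def brovey_pansharpen(ms, pan):
    """Pan-sharpen multi-spectral bands with a higher-resolution
    panchromatic band via the Brovey transform: each band is scaled
    by the ratio of the pan value to the mean of the bands.
    ms  : (bands, rows, cols) array, already resampled to the pan grid
    pan : (rows, cols) panchromatic array
    """
    intensity = ms.mean(axis=0)
    eps = 1e-6  # guard against division by zero in dark pixels
    return ms * (pan / (intensity + eps))

# Tiny synthetic example: 2 bands, 2x2 pixels.
ms = np.array([[[2.0, 4.0], [6.0, 8.0]],
               [[4.0, 8.0], [2.0, 4.0]]])
pan = np.array([[3.0, 6.0], [4.0, 6.0]])
sharp = brovey_pansharpen(ms, pan)
```

A real workflow would first resample the multi-spectral bands to the panchromatic grid; here the arrays are assumed to be co-registered already.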
Figure 2. QuickBird multi-spectral imagery; LANDSAT multi-spectral imagery; LANDSAT imagery pan-sharpened to 5 meters.

The QuickBird multi-spectral satellite imagery was acquired in July 2003 by DigitalGlobe. The 2.7-meter multi-spectral bands were used to test the automated stand delineation process. The imagery was found to have too much detail to produce desirable results in the automated process. Once the stand delineation process was complete, however, the resolution of the QuickBird data was suitable for detailed classification and fire fuels mapping.

The second data source was LANDSAT imagery from July 2002 that was pan-sharpened with both the 0.7-meter QuickBird panchromatic data and the 15-meter LANDSAT panchromatic band. The LANDSAT image was available through the USGS MRLC program, which lowered the data cost significantly, and the large image footprint makes it especially desirable to use in this application. The LANDSAT scene that was pan-sharpened with the QuickBird image to a resolution of 5 meters did not produce vegetation stands that met the requirements of the project. Similar to the QuickBird multi-spectral data, the image produced stand polygons with too much detail. The final dataset to be evaluated was the LANDSAT data that was pan-sharpened with its 15-meter panchromatic band. This data produced the most accurate results. The vegetation stands most closely resembled those of the USFS CVUs and were of a size suitable for making management decisions. In addition to the six multi-spectral bands from the LANDSAT data, a Normalized Difference Vegetation Index (NDVI) layer was created in order to help isolate the vegetation. Ten-meter elevation data and aspect were also included in the project.

Software

The pan-sharpened LANDSAT data, vegetation index, and elevation data were input into classification software. The software used was eCognition Professional 4.0 from Definiens Imaging.
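The NDVI layer described under Data Sources is a simple band ratio, NDVI = (NIR - Red) / (NIR + Red), which is high for healthy vegetation because leaves reflect strongly in the near-infrared and absorb red light. A minimal NumPy sketch (the function name and sample reflectances are invented for illustration):

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), bounded to [-1, 1].
    High values indicate healthy vegetation; values near zero
    indicate bare soil or impervious surfaces."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    out = np.zeros_like(denom)
    # Avoid division by zero where both bands are zero.
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Example: a vegetated pixel vs. bare soil (hypothetical reflectances).
nir = np.array([0.50, 0.30])
red = np.array([0.08, 0.25])
print(ndvi(nir, red))  # vegetation ~0.72, bare soil ~0.09
```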
eCognition was developed to overcome some of the limitations found in traditional, pixel-based classification techniques. Traditional classification often produces undesirable speckling, or "salt-and-pepper" anomalies, rather than cohesive groups of classified pixels. This phenomenon is especially apparent when using high-resolution imagery. eCognition's approach is based on the concept that "important semantic information necessary to interpret an image is not represented in single pixels but in meaningful image objects and their mutual relations" (Definiens Imaging, 2004). The software therefore uses tone, shape, texture, area, and context, rather than just spectral information, to create an easily interpreted, object-oriented classification.
Figure 3. Definiens Imaging's eCognition software interface.

Procedures

The approach taken for the stand delineation was to first classify vegetation types very generally based on small image objects. The small areas were then aggregated based on their classification to create vegetation stand boundaries of between 5 and 20 acres, a size suitable for fuels treatment planning and other management practices.

The first step was to load the image data into a new eCognition project. The six pan-sharpened multi-spectral layers, vegetation index, elevation, and aspect data were input into the software. eCognition has the ability to handle multiple resolutions of imagery, so the 10-meter elevation model was used in its native resolution and not resampled to match the rest of the data.

Next was to generate the image objects that would be used in the general vegetation classification through a process referred to as segmentation. A variety of parameters can be altered to generate image objects that meet a particular need (Flanders, 2003; Manakos, 2000). Image layers can be given more or less weight than the others. The degree to which spectral (color) attributes drive the segmentation can be adjusted, and a parameter to define compact versus irregularly shaped areas can be set (Vernier, 2004). A variety of weighting schemes for the various layers, object scale parameters, and values for color and compactness of the image objects were tested. Even weighting of the multi-spectral layers, lower weight for the elevation layers, a small scale parameter, high emphasis on color, and an equal split between irregular and compact objects provided the best results for isolating the vegetation.

Reno, Nevada May 1-5, 2006
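eCognition's multiresolution segmentation itself is proprietary, but the published heterogeneity criterion behind it combines a color term with shape (compactness and smoothness) terms using the same kinds of weights tuned above. A simplified sketch, with invented function names and toy numbers, of how a candidate region merge would be scored:

```python
import math

def shape_terms(perimeter, n_pixels, bbox_perimeter):
    """Compactness and smoothness of an image object, after the
    published multiresolution segmentation shape terms:
    compactness = perimeter / sqrt(area),
    smoothness  = perimeter / bounding-box perimeter."""
    compactness = perimeter / math.sqrt(n_pixels)
    smoothness = perimeter / bbox_perimeter
    return compactness, smoothness

def merge_score(h_color, compactness, smoothness,
                w_color=0.8, w_compact=0.5):
    """Overall heterogeneity f = w_color * h_color
    + (1 - w_color) * (w_compact * compactness
                       + (1 - w_compact) * smoothness).
    A high w_color corresponds to the 'high emphasis on color'
    setting described in the text."""
    h_shape = w_compact * compactness + (1.0 - w_compact) * smoothness
    return w_color * h_color + (1.0 - w_color) * h_shape

# Example: a 4x4 square object (perimeter 16, bounding-box perimeter 16).
cpt, smo = shape_terms(16, 16, 16)  # compactness 4.0, smoothness 1.0
f = merge_score(h_color=2.0, compactness=cpt, smoothness=smo)
print(f)  # 0.8*2.0 + 0.2*(0.5*4.0 + 0.5*1.0) = 2.1
```

In the published formulation, merging continues while the heterogeneity increase stays below the square of the scale parameter, which is why the small scale parameter chosen here yields the small initial objects described above.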
Figure 4. LANDSAT image showing vegetation polygon definition.

Once generated, the image objects were analyzed to determine the input layers that identified general vegetation types by inspecting each layer using the Feature View and performing a series of Feature Space Optimizations. The Feature View "allows the user to visualize the properties of image objects in a graphical way and therefore provides an intuitive access to the peculiarity of a certain feature over all image objects in a scene" (Definiens Imaging, 2004). A class hierarchy was created that defined general vegetation classes including coniferous forest, deciduous forest, grass, impervious areas, water, and clouds (Kressler, 2003; Mansor, 2002). Sample image objects were chosen as training sites, and the Feature Space Optimization tool was used to define the image layers to be included in the Nearest Neighbor classifier. The Feature Space Optimization allows the user to specify two or more classes and a set of image layers, then determines which layers statistically define the classes best, based on the samples that have been chosen for each class. The results vary based on the geographic location of the area being classified and the samples that are used.

A classification-based segmentation was then performed with a higher scale parameter to generate larger image objects. These objects contained vegetation of only one type, which is not typical of true forest management areas. Therefore, an additional segmentation was run that allowed different vegetation types to be combined into areas that created more meaningful management units. Since fire fuels are dependent on the specific geographic location, this prototype project focused on a general land cover classification using high-resolution data to represent fuels types. This approach can be modified for individual areas, or specific fuels could be mapped based on the local needs.
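The Nearest Neighbor classifier assigns each unclassified image object the class of its closest training sample in the optimized feature space. A minimal NumPy sketch; the feature layers and their values (mean NDVI, mean NIR, scaled elevation) are hypothetical:

```python
import numpy as np

def nearest_neighbor_classify(samples, labels, objects):
    """Assign each image object the class of its nearest training
    sample by Euclidean distance in feature space.
    samples : (n_samples, n_features) training-object feature means
    labels  : (n_samples,) class names
    objects : (n_objects, n_features) features of unclassified objects
    """
    # Pairwise distance matrix of shape (n_objects, n_samples).
    d = np.linalg.norm(objects[:, None, :] - samples[None, :, :], axis=2)
    return labels[np.argmin(d, axis=1)]

# Hypothetical feature means: [mean NDVI, mean NIR, scaled elevation].
samples = np.array([[0.7, 0.5, 0.3],    # coniferous forest sample
                    [0.2, 0.3, 0.2],    # grass sample
                    [0.0, 0.1, 0.2]])   # impervious sample
labels = np.array(["conifer", "grass", "impervious"])
objects = np.array([[0.65, 0.45, 0.35],
                    [0.05, 0.12, 0.18]])
print(nearest_neighbor_classify(samples, labels, objects))
# -> ['conifer' 'impervious']
```

The Feature Space Optimization step described above corresponds to choosing which columns of this feature matrix best separate the sampled classes before running the classifier.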
The high-resolution classification used the QuickBird data pan-sharpened to 0.7-meters. The classes were similar to those listed above and a similar procedure was used to derive this classification. Once the vegetation types were classified, statistics were calculated showing the dominant vegetation type for each vegetation stand. Other statistics also showed the percent of the dominant class and the density and diversity of vegetation within each stand.
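The per-stand statistics described above amount to a zonal summary over the stand polygons. A sketch, with invented names and toy rasters, of computing the dominant class, its percent cover, and a simple diversity count for each stand:

```python
import numpy as np

def stand_statistics(stand_ids, veg_class):
    """Per-stand summary: dominant vegetation class, its percent
    cover, and diversity (count of distinct classes present).
    stand_ids : integer stand label for each pixel (flattened raster)
    veg_class : integer vegetation class for each pixel (same shape)
    """
    stats = {}
    for sid in np.unique(stand_ids):
        classes = veg_class[stand_ids == sid]
        counts = np.bincount(classes)
        dominant = int(np.argmax(counts))
        stats[int(sid)] = {
            "dominant": dominant,
            "pct_dominant": 100.0 * counts[dominant] / classes.size,
            "diversity": int(np.count_nonzero(counts)),
        }
    return stats

# Toy example: stand 1 is 3/4 conifer (class 0), stand 2 all grass (1).
stand_ids = np.array([1, 1, 1, 1, 2, 2])
veg_class = np.array([0, 0, 0, 1, 1, 1])
print(stand_statistics(stand_ids, veg_class))
```

In practice the stand raster would come from the classification-based segmentation and the class raster from the high-resolution QuickBird classification.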
Figure 5. Percent cover of the dominant class in each vegetation stand.

RESULTS/CONCLUSIONS

The results of this pilot project show that vegetation stands similar to USFS CVUs can be mapped from remotely sensed data using automated procedures. These results have been verified against the existing field data for the pilot area, but more verification is needed. The pilot area contains a subset of the vegetation and fuels types found in areas affected by wildfire. Other geographic areas, with different fuels types, need to be tested to determine the widespread utility of these procedures. While preliminary findings show that this classification technique accurately identifies vegetation stands appropriate for forest management and can be used to aid land managers in planning and carrying out management programs, further verification would allow for additional refinement of the technique. As fire fuels mitigation and other wildfire control measures take place around the Evergreen area, new data will be acquired and the mapping will be repeated to monitor the effectiveness of the program. Finally, additional study areas will be mapped using similar data sources, and additional field data will be gathered so that a comprehensive accuracy assessment can be performed.
Figure 6. Final classification of the pan-sharpened QuickBird data for the Evergreen, Colorado pilot area.

REFERENCES

Arroyo, L., Healey, S., Cohen, W., Cocero, D., and J. Manzanera. (2005). Regional fuel mapping using an object-oriented classification of QuickBird imagery. Proceedings of Semana Geomática, February 2005, Barcelona, Spain.

Definiens Imaging. (2004). http://www.definiens-imaging.com (10 Jun. 2005).

Flanders, D., Hall-Beyer, M., and J. Pereverzoff. (2003). Preliminary evaluation of eCognition object-based software for cut block delineation and feature extraction. Can. J. Remote Sensing, 29(4):441-452.

Huang, C., Yang, L., Wylie, B., and C. Homer. (2001). A strategy for estimating tree canopy density using LANDSAT 7 ETM+ and high resolution images over large areas. Proceedings of the Third International Conference on Geospatial Information in Agriculture and Forestry, Denver, Colorado.

Kressler, F.P., Kim, Y.S., and K.T. Steinnocher. (2003). Object-oriented land cover classification of panchromatic KOMPSAT-1 and SPOT-5 data. http://www.definiens-imaging.com/documents/publications/i_d01_19.pdf (3 Nov. 2003).

Manakos, I., Schneider, T., and U. Ammer. (2000). A comparison between the ISODATA and the eCognition classification methods on basis of field data. Poster at the XIXth ISPRS Congress, Amsterdam.

Mansor, S., Hong, W.T., and A.R.M. Shariff. (2002). Object oriented classification for land cover mapping. Proc. Map Asia 2002, http://www.gisdevelopment.net/application/environment/overview/envo0010.htm (3 Apr. 2004).

Merenyi, E., Farrand, W.H., Stevens, L.E., Melis, T.S., and K. Chhibber. (2000). Studying the potential for monitoring Colorado River ecosystem resources below Glen Canyon Dam using low-altitude AVIRIS data. Proc. 9th AVIRIS Earth Science and Applications Workshop, Pasadena.

Vernier, M. (2004). Making timber cruising more efficient. Imaging Notes, Winter 2004, pp. 23-25.