In previous attempts to identify aquatic vegetation from remotely sensed images using classification trees (CT), the images used to apply CT models to different times or locations necessarily originated from the same satellite sensor as the original images used in model development, greatly limiting the application of CT. In this study, we developed CT models based on spectral index (SI) images from the Landsat7-ETM+, HJ-1B-CCD, Landsat5-TM and ALOS-AVNIR-2 sensors. Optimal CT thresholds produced average classification accuracies of 78.1%, 84.7% and 74.0% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. However, the optimal CT thresholds differed among the sensor images, with an average relative variance (RV) of 6.40%. We developed and evaluated three new approaches to normalizing the images. The best-performing method (the 0.1% index scaling method) normalized the SI images using tailored percentages of extreme pixel values. Using images normalized by 0.1% index scaling, CT models for a particular sensor in which the thresholds were replaced by those from models developed for images originating from other sensors provided average classification accuracies of 76.0%, 82.8% and 68.9% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. Applying the CT models developed for normalized 2009 images to 2010 images resulted in high classification (78.0%–93.3%) and overall (92.0%–93.1%) accuracies. Our results suggest that the 0.1% index scaling method provides a feasible way to apply CT models directly to images from sensors or time periods that differ from those of the images used to develop the original models. Distinct plant species dominated the emergent, floating-leaf and submerged vegetation, respectively.
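The 0.1% index scaling idea can be illustrated as a percentile-based normalization: clip each SI image at the extreme 0.1% of pixel values in each tail, then rescale to a common range. The sketch below is illustrative only, not the paper's exact procedure; the function name, the default tail percentages, and the fixed [0, 1] target range are assumptions.

```python
import numpy as np

def percentile_scale(si, lower_pct=0.1, upper_pct=99.9):
    """Normalize a spectral-index (SI) image by clipping the extreme
    0.1% of pixel values at each tail and rescaling to [0, 1].

    Hypothetical sketch of percentile-based normalization; the
    tailored percentages used per sensor in the study may differ.
    """
    lo = np.percentile(si, lower_pct)   # lower clipping threshold
    hi = np.percentile(si, upper_pct)   # upper clipping threshold
    clipped = np.clip(si, lo, hi)       # suppress extreme pixel values
    return (clipped - lo) / (hi - lo)   # rescale to the common range [0, 1]
```

Normalizing each sensor's SI image onto a common range in this way is what allows CT thresholds learned on one sensor's images to be transferred to another's.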
Because emergent vegetation has the highest transmission intensity and submerged vegetation the lowest, areas of emergent vegetation mixed with other aquatic vegetation types were classified as emergent vegetation, and areas of mixed floating-leaf and submerged vegetation were classified as floating-leaf vegetation.

2.2. Field Surveys

We conducted field surveys on 14–15 September 2009 and 27 September 2010. In 2009, a total of 426 training or validation samples were obtained from: (a) 208 plots located along a transect from the east to the south of the lake; (b) 137 plots from 26 lake locations distributed nearly uniformly across the lake; and (c) 48 plots of reed vegetation and 33 plots of terrestrial land cover (e.g., shoreline roads and buildings such as docks, businesses and factories) selected from a 1:50,000 land use and land cover map. Similarly, a total of 539 field samples were obtained in 2010, including 438 photographs taken along a transect from the east to the southeast of the lake and 101 plots from the 1:50,000 land use and land cover map. The field survey has been described in detail by Zhao.

2.3. Image Processing

Because they contain dynamic information concerning aquatic vegetation and related environmental factors, multi-seasonal images have the potential to provide higher classification accuracy than a single image [16,38]. Therefore, in this study we used a combination of two images for aquatic vegetation identification, one from winter and one from summer.
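Combining a winter and a summer image amounts to giving the classifier two spectral-index values per pixel instead of one. A minimal sketch of that stacking step, assuming co-registered 2-D SI arrays of equal shape (the function name is an illustration, not the study's code):

```python
import numpy as np

def stack_seasonal_features(winter_si, summer_si):
    """Stack per-pixel spectral-index values from a winter and a
    summer image into one feature array for classification.

    Hypothetical sketch: assumes both images are co-registered
    2-D arrays with identical shapes, so each pixel ends up with
    a (winter, summer) feature pair.
    """
    if winter_si.shape != summer_si.shape:
        raise ValueError("seasonal images must be co-registered and equally sized")
    return np.stack([winter_si, summer_si], axis=-1)
```

A CT model can then threshold the winter and summer values of each pixel independently, which is what makes the seasonal pair more informative than a single acquisition.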
A total of six image pairs were used: (1) ETM+ images dated 26 March and 17 August 2009 (SLC-off images downloaded from http://earthexplorer.usgs.gov/); (2) TM images dated 13 January and 10 September 2009; (3) AVNIR-2 images from ALOS dated 30 December 2008 and 17 August 2009; (4) CCD images from HJ-1B dated 15 March and 10 September 2009; (5) ETM+ images dated 13 March and 21 September 2010; and (6) CCD images from HJ-1B dated 10 March and 21 September 2010. Of these image pairs, the four from 2009 (including the AVNIR-2 image dated 30 December 2008 because no high quality AVNIR-2 image could be obtained from the winter of 2009) were used to compare different normalization methods, while the other two pairs were used to validate the robustness of our recommended normalization method. The band wavelength ranges and resolutions of the images used in this study.