Article

Mapping Urban Tree Cover Changes Using Object-Based Convolution Neural Network (OB-CNN)

by Shirisa Timilsina 1, Jagannath Aryal 1,2,* and Jamie B. Kirkpatrick 1

1 School of Technology, Environments and Design, Discipline of Geography and Spatial Sciences, University of Tasmania, Hobart, Tasmania 7001, Australia
2 Melbourne School of Engineering, University of Melbourne, Parkville, Victoria 3010, Australia
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(18), 3017; https://doi.org/10.3390/rs12183017
Submission received: 7 August 2020 / Revised: 11 September 2020 / Accepted: 14 September 2020 / Published: 16 September 2020

Abstract: Urban trees provide social, economic, environmental and ecosystem services benefits that improve the liveability of cities and contribute to individual and community wellbeing. There is thus a need for effective mapping, monitoring and maintenance of urban trees. Remote sensing technologies can map and monitor urban tree coverage and its changes over time as an efficient and low-cost alternative to field-based measurements, which are time consuming and costly. Automatic extraction of urban land cover features with high accuracy is a challenging task, and it demands object-based artificial intelligence workflows for efficiency and thematic accuracy. The aim of this research is to effectively map urban tree cover changes and model the relationship of such changes with socioeconomic variables. The object-based convolutional neural network (CNN) method is illustrated by mapping urban tree cover changes between 2005 and 2015/16 using satellite and Google Earth imagery and Light Detection and Ranging (LiDAR) datasets. The training sample for the CNN model was generated by Object-Based Image Analysis (OBIA) using thresholds in a Canopy Height Model (CHM) and the Normalised Difference Vegetation Index (NDVI). The tree heatmap produced by the CNN model was further refined using OBIA. Tree cover loss, gain and persistence were extracted, and multiple regression analysis was applied to model their relationship with socioeconomic variables. The overall accuracy and kappa coefficient of tree cover extraction were 96% and 0.77 for the 2005 images and 98% and 0.93 for the 2015/16 images, indicating that the object-based CNN technique can be effectively implemented for urban tree coverage mapping and monitoring. There was a decline in tree coverage in all suburbs. Mean parcel size and median household income were significantly related to tree cover loss (R2 = 58.5%). Tree cover gain and persistence had positive relationships with tertiary education, parcel size and ownership change (gain: R2 = 67.8%; persistence: R2 = 75.3%). The research findings demonstrate that remote sensing data with intelligent processing can contribute to the development of policy input for the management of tree coverage in cities.

1. Introduction

Trees are an important element of cities and suburbs, benefiting and inconveniencing urbanites in manifold ways [1,2,3,4,5]. It is therefore not surprising that a growing literature documents temporal change in urban tree density and cover [6,7,8,9,10,11] and tests hypotheses on the causes of change [12,13,14,15,16,17,18].
The heterogeneous nature of natural and built environments in urban landscapes makes it difficult to quantify and monitor the spatial extent of urban tree canopies [19,20]. These assessments are typically achieved through conventional field-based methods that involve ground-based data collection [21,22]. Ground methods are labour- and cost-intensive, and periodic field visits for regular monitoring are not always feasible [19,21,22]. Furthermore, field-based tree cover data collection in cities may be limited by access to private lands. As an alternative to field-based methods, visual interpretation of aerial photography has been used extensively for tree detection since the early 1960s. However, visual interpretation is also labour- and cost-intensive [21,22].
Remote sensing techniques can effectively map urban trees and monitor the temporal and spatial changes of complex urban environments. Remote sensing assessments can be quicker and more cost-effective than ground-based data collection and can overcome accessibility difficulties [23,24,25,26]. Hence, with the availability of historical remote sensing data and advances in image resolution, remote sensing technology can have great utility in mapping and monitoring urban tree cover [23,27,28].
Geographic object-based image analysis (GEOBIA) of very high-resolution satellite imagery has been widely used to measure urban tree cover [28,29]. Our search in Scopus in July 2020 for publications that mentioned “Object Based Image Analysis for Remote Sensing and Urban Trees” returned 170 results. The use of GEOBIA for urban tree extraction has been increasing due to the availability of very high-resolution satellite imagery [30,31] and the introduction of user-friendly GEOBIA software packages [32], including Trimble eCognition (https://geospatial.trimble.com), the ENVI feature extraction module (https://www.harrisgeospatial.com) and ERDAS Imagine (https://www.hexagongeospatial.com). These packages allow users to develop rulesets based on the study area, the available dataset and the research objectives, capturing the semantics associated with geographic features.
Walker and Briggs [30] developed an object-based classification approach to map urban forest and to isolate vegetation patches, from shrubs to large trees, in Phoenix using 0.61-m spatial resolution aerial RGB images. Using the above classification method, Walker and Blaschke [33] generated a transferable object-based ruleset to classify and map the urban forest in the Phoenix metropolitan area. Zhou et al. [31] applied an object-based approach to land cover classification and change detection using high-resolution (0.60-m) imagery for two time periods and LiDAR data. They classified their images into five land cover classes: (1) buildings, (2) pavement, (3) coarse-textured vegetation (trees and shrubs), (4) fine-textured vegetation (herbaceous vegetation and grasses) and (5) bare soil. They compared the accuracy of pixel-based and object-based post-classification change detection. Because it integrates spatial information and expert knowledge into the change detection process, the object-based approach performed better, with an overall accuracy of 90% and a kappa coefficient of 0.85, than the pixel-based method, which had an overall accuracy of 81% and a kappa coefficient of 0.71. Moskal et al. [19] used the OBIA approach for land use/land cover (LULC) classification and tree cover assessment in the city of Seattle, WA, USA. They compared LULC classifications between a 2009 four-band aerial photograph with 1-m spatial resolution and 2009 four-band QuickBird satellite images with 0.6-m spatial resolution. They found that the spectral properties of remote sensing imagery are more useful than the spatial properties for tree cover assessments in urban environments. Zhou et al. [34] used object-based change detection at multiple levels to map urban vegetation at the individual tree scale. They used nine sets of near-infrared (NIR) aerial images from 1988 to 2006 for Shanghai, China. The ruleset was created using the Normalised Difference Vegetation Index (NDVI), the Normalised Difference of Saturation and Brightness (NDSV), the density of low-NIR pixels and the density of dark details. Banzhaf and Kollai [35] applied the OBIA approach to map urban trees in 10 districts of Leipzig, Germany using four-band digital orthophotos of 2002 and 2012 with 0.20-m spatial resolution and LiDAR derivatives (2-m digital elevation model (DEM) and digital surface model (DSM)) for 2012. They used thresholds of NDVI and the normalised DSM (DSM minus DEM) to identify urban trees. Ejares et al. [36] extracted tree canopy cover with the OBIA approach from LiDAR data to map trees in urban barangays of Cebu City, Philippines. Height, area, roundness, slope, length-width ratio and elliptic fit were also evaluated to extract the contextual features of tree canopies. They used multi-thresholds followed by multi-resolution segmentation to segment the surface model into finer objects. A CHM threshold (4 m to 40 m) was used in the final classification to separate trees from other classes. The overall accuracy of the tree canopy cover extraction was 96.6%, with a Kappa Index of Agreement (KIA) of 0.9.
The GEOBIA method can be more accurate than pixel-based methods, especially for very high-resolution images [28,32,37]. However, problems arise when over-segmentation and under-segmentation appear within the same image [38,39,40,41]. Additionally, feature extraction in urban environments is difficult because of the range of materials that make up the same classes [42] and the occlusion and shadows that break image objects into finer objects [20].
Extracting urban land cover features with high thematic accuracy in an automated way is still a challenging task with GEOBIA, and it demands machine-learning artificial intelligence workflows [43,44,45]. Among numerous alternative techniques, convolutional neural networks (CNNs) [46] are thought to be among the most promising for image classification [47,48,49]. The CNN technique became popular after the release of AlexNet in 2012 [50] and the availability of CNN implementations in Google TensorFlow. A CNN is a supervised deep-learning neural network that uses labelled data. It combines an input layer, hidden layers with hidden units and an output layer. The hidden units are like neurons that are fully connected with each individual neuron of the previous layer [49,51]. CNNs have proven successful in vegetation contexts [52,53,54,55,56,57]. Li et al. [52] used a CNN algorithm on very high-resolution QuickBird images for oil palm tree detection in Malaysia and achieved 87.95% overall accuracy. Chen et al. [53] proposed a novel CNN-based approach to count apples and oranges in an unstructured environment with a 0.76 F1 score. Similarly, Wang et al. [54] used a faster region-based CNN (R-CNN) workflow to detect mango fruit flowers. Sa et al. [55] used the R-CNN workflow for sweet pepper and melon detection and achieved a 0.84 F1 score. Similarly, Csillik et al. [56] used a CNN workflow with GEOBIA post-processing to identify citrus trees in a complex agricultural area of California from unmanned aerial vehicle (UAV) imagery, achieving 96.24% overall accuracy. Timilsina et al. [57] demonstrated that the accuracy of image classifications can be improved by combining OBIA and CNN methods to map urban tree cover. No study has been published that maps temporal and spatial changes of tree cover using GEOBIA and CNN.
Trees in domestic gardens have been shown to be associated with high household incomes [14,15,16,17,18,58,59], high levels of education [15,16,17,58,60] and large block sizes [16]. Motives for planting and removing trees have proven to be highly varied, as have preferences for particular types of trees, suggesting that changes of garden ownership may be a major cause of tree changes in suburbia [61,62]. However, neither time since purchase at the parcel level nor mean time since purchase at the aggregate level has been included in any of the works that relate tree changes to other variables.
The main objective of this research is to identify the urban tree cover changes in suburban Hobart, Tasmania, Australia between 2005 and 2015/16 using object-based CNN and to model a relationship between tree cover changes and socioeconomic variables. In order to meet the main objective, the following subobjectives are addressed:
  • Perform stratified random sampling to select sample study areas,
  • Process imagery and LiDAR data and generate the canopy height model (CHM) and normalised difference vegetation index (NDVI),
  • Prepare automatic training samples and run object-based CNN for 2005 and 2015/16 images,
  • Perform a sample and parcel level tree cover change analysis between 2005 and 2015/16 and
  • Perform a multiple regression analysis and general linear model (GLM) analysis to model relationships between tree cover changes and socioeconomic variables.
The organisation of this paper is as follows: Section 2 presents the study area, datasets and the adopted methodology; Section 3 presents the results; Section 4 discusses the results; Section 5 outlines the limitations and Section 6 presents the conclusions and possible future work.

2. Materials and Methods

2.1. Study Area and Sample Selection

Fourteen suburbs in the inner and general residential zone of the western suburbs of Hobart, Tasmania, Australia were selected (Figure 1, Table 1) to represent a range in socioeconomic characteristics (median household income and tertiary education). The mean elevation of selected suburbs ranges between 23 and 69 m (https://en-au.topographic-map.com/maps/jqqb/Hobart/). One sample point in each suburb was generated using the “random point creation” tool in ArcGIS Pro 2.4. For each sample point, a sample patch of four hectares was created by buffering sample points (Table 1). A representative raster plot of sample patches is presented in Figure 2 (refer to Appendix A for all the sample patch images). Ten random private cadastral parcels from each sample patch were selected using the “create random points” tool (Figure 3). The selected parcels had to be completely inside the boundary of each sample patch. Roads and parks were excluded from selection.

2.2. Datasets

Very high spatial resolution (VHSR) multispectral QuickBird satellite images (60 cm) with red, green, blue and near infra-red (NIR) spectral bands (Figure 4a), acquired in November 2005, were used for the 2005 measurements. The same type of image was available for December 2015 for the three southernmost suburbs. These images were atmospherically and geometrically corrected. For the other eleven suburbs, January 2016 Google Earth images with red, green and blue bands (1-m spatial resolution) (Figure 4b) were downloaded. The December 2015 and January 2016 images were used for the 2015/16 measurements.
Airborne LiDAR point clouds of the study area for 2008 (Climate Future Mission) and 2011 (Mt. Wellington Mission) were downloaded from the Elevation and Depth Foundation Spatial Data (ELVIS) website. The point clouds of 2008 and 2011 were used to generate canopy height models (CHMs) and further used to define tree height thresholds for 2005 and 2015/16 images, respectively. The technical details of point cloud data acquisition missions are listed in Table 2.
Tenure, land use zoning and the area of each cadastral parcel were obtained from [63] (https://listdata.thelist.tas.gov.au/opendata/). Median household income in 2016 and the percentage of residents with tertiary qualifications in 2016 were obtained from the Australian Bureau of Statistics (ABS) website [64] (www.abs.gov.au) for the fourteen suburbs. Dates of sales of each parcel in the period 1983–2015 were obtained from the nationally leading property website (www.realestate.com.au). The years between the last sale and 2015 and the number of sales in the period 1983–2015 were extracted from these data.

2.3. Data Preprocessing

2.3.1. Image Georeferencing

Google Earth images for January 2016 were georeferenced to projected Universal Transverse Mercator (UTM) coordinates of zone 55 (GDA 1994 MGA Zone 55). Reference placemarks with UTM coordinates were marked on the Google Earth screen before image capture, and georeferencing was done by associating the recorded coordinates with these marks. The transformation used a first-order polynomial (affine), as it provides better and more accurate transformation results than other techniques [65,66]. The accuracy of the rectified images was cross-verified against the 2005 satellite images. The atmospheric and geometric correction of the rectified images was determined prior to further image analysis.
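To make the transformation concrete, the following minimal sketch fits the six coefficients of a first-order polynomial (affine) transformation to ground control points by least squares. The control point coordinates are hypothetical, and this illustrates the principle only, not the exact routine of the GIS software used.

```python
# A least-squares fit of an affine (first-order polynomial) transformation,
# mapping pixel (col, row) positions to map (easting, northing) coordinates.
# The ground control points below are hypothetical.
import numpy as np

pixel = np.array([[10, 20], [500, 40], [480, 600], [30, 580]], dtype=float)
world = np.array([[523400.0, 5252300.0], [523890.0, 5252280.0],
                  [523870.0, 5251720.0], [523420.0, 5251740.0]])

# Solve world = [col, row, 1] @ coeffs for the six affine coefficients
design = np.hstack([pixel, np.ones((len(pixel), 1))])
coeffs, *_ = np.linalg.lstsq(design, world, rcond=None)

# Residuals give an RMSE-style check of the rectification accuracy
rmse = np.sqrt(np.mean(np.sum((design @ coeffs - world) ** 2, axis=1)))
print("affine coefficients:\n", coeffs, "\nRMSE (m):", round(float(rmse), 3))
```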

2.3.2. Normalised Difference Vegetation Index

The Normalised Difference Vegetation Index (NDVI) was calculated from the 2005 satellite images using the means of the red and near infra-red bands (Equation (1)) [67]. An NDVI value of 0.4 was used as the threshold to identify tree coverage in the 2005 images.
NDVI = (Mean(NIR) − Mean(Red)) / (Mean(NIR) + Mean(Red)) (1)
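A minimal sketch of the NDVI calculation and the 0.4 threshold follows, assuming the segment-mean band values are already available as arrays (the values shown are invented for illustration):

```python
# NDVI = (NIR - Red) / (NIR + Red), applied to hypothetical segment-mean
# band values; segments with NDVI >= 0.4 are flagged as tree candidates.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    denom = nir + red
    safe = np.where(denom == 0, 1.0, denom)  # guard against division by zero
    return np.where(denom == 0, 0.0, (nir - red) / safe)

nir = np.array([0.55, 0.30, 0.60])
red = np.array([0.10, 0.25, 0.05])
print(ndvi(nir, red) >= 0.4)  # [ True False  True]
```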

2.3.3. Canopy Height Model (CHM)

The LiDAR point cloud datasets were merged and clipped to the study area for both 2008 and 2011 using LAStools (https://rapidlasso.com/lastools/). Ground and high vegetation points in the classified LiDAR point clouds were represented by class 2 and class 5, respectively. The digital surface model (DSM) was prepared by filtering the class 5 point cloud with the “las2dem” tool, and a digital elevation model (DEM) was generated from the combined point cloud of class 2 and class 5. The canopy height model (CHM) was prepared by subtracting the DEM from the DSM (Equation (2), Figure 5) [68] using the “band math” tool in ENVI 5.5. The CHMs for 2008 and 2011 were used to identify tree coverage in the 2005 and 2015/16 images, respectively.
CHM = DSM − DEM (2)
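A sketch of Equation (2) follows, assuming the DSM and DEM have already been rasterised onto a common grid (e.g., with LAStools, as described above); the height values are invented for illustration.

```python
# CHM = DSM - DEM on co-registered grids; values are hypothetical heights (m).
import numpy as np

dsm = np.array([[12.0, 15.5], [8.0, 9.2]])  # surface heights (class 5 returns)
dem = np.array([[10.0, 10.2], [8.0, 8.0]])  # ground heights (class 2 + 5)

chm = np.clip(dsm - dem, 0.0, None)  # clamp negative (noisy) heights to zero
print(chm)  # [[2.  5.3] [0.  1.2]]
```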

2.4. Preparation of Training Samples

The training sample for the CNN model requires at least two land cover classes [69]. Hence, tree and other (nontree) classes were prepared. The tree class represented urban trees of different species within the sample patches, and the other class represented all nontree features, including grassland, bare land, buildings, water bodies and roads.
Object-based image analysis (OBIA) in eCognition was used to segment the images with the multiresolution segmentation algorithm at the pixel level. The tree and nontree classes for the training dataset from the 2005 satellite images were prepared using CHM and NDVI values (Figure 6). The shape and compactness parameters were set to 0.1 and 0.5, respectively. To find the optimum scale factor for segmentation, iterative segmentation was run with scale factor values ranging from 50 down to 0.1 (Table 3) within a 2005 sample patch, as sketched below. A scale factor of 2 gave the optimum segmentation result for the 2005 image: the maximum and minimum values of CHM and NDVI did not change beyond scale factor 2 (Table 3), indicating a steady state.
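eCognition’s multiresolution segmentation is proprietary, so the sketch below uses Felzenszwalb segmentation from scikit-image purely as a stand-in to illustrate the scale-factor sweep and the stability check on object-level NDVI extremes; the image is random and the scale values are only analogous to eCognition’s scale factor.

```python
# Illustrative scale sweep: segment at several "scale" values and watch when
# the per-object NDVI extremes stop changing (the steady state noted in Table 3).
import numpy as np
from skimage.segmentation import felzenszwalb  # stand-in segmenter

rng = np.random.default_rng(0)
ndvi_img = rng.random((64, 64))  # stand-in for an NDVI raster

for scale in (50, 20, 10, 5, 2, 1):
    labels = felzenszwalb(ndvi_img, scale=scale, sigma=0.5, min_size=4)
    means = np.array([ndvi_img[labels == l].mean() for l in np.unique(labels)])
    print(f"scale={scale:>2}: object NDVI means in "
          f"[{means.min():.3f}, {means.max():.3f}], objects={len(means)}")
```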
A height threshold of five metres in the CHM was used to separate trees from other vegetation cover. The threshold was derived by assuming that a tree of two metres in 2005 would grow at one metre per year, reaching about five metres by the 2008 LiDAR acquisition.
The 2015/16 images were segmented using the multiresolution segmentation algorithm with the scale, shape and compactness parameters set at 2, 0.1 and 0.5, respectively. The training dataset of the tree class for the 2015/16 images was prepared using CHM values only, without NDVI, because the 2016 Google Earth images lack a NIR band. Segments with CHM values (from the 2011 LiDAR point cloud) greater than or equal to two metres were assigned to the tree class. Representative training samples for the tree and other classes were generated from the whole study area. Trees that were present in 2011 but not in 2015 were manually filtered out by visual examination.

2.5. Object-Based CNN for Tree Cover Identification

Some parts of this section are repeated from an earlier paper by the two senior authors [57].
The CNN workflow of Trimble’s eCognition Developer 9.4 software was applied for tree extraction (Figure 7). This CNN workflow in eCognition is based on the Google TensorFlow API [69]. The overall analysis was done on a computer with a 64-bit operating system, 16 GB RAM and an Intel (R) Core (TM) i7-7700 CPU @ 3.60 GHz processor.

2.5.1. Generate Labelled Sample Patches for CNN Model

In deep learning, finding the most suitable CNN architecture is still an open research problem. When generating sample patches, several parameters must be considered: the sample count, the sample patch size and the image layers. In the present research, 8000 sample patches were generated for the tree and other classes, separately. The sample patch size was set to 22 × 22 pixels, selected by trial and error: values smaller than 22 × 22 increased tree canopy detection error, whereas values larger than 22 × 22 missed some of the small trees. Most small trees in the study area fit within 22 × 22 pixels.
To apply max pooling while creating the CNN model, the size of the input training image should be an even number [69]. The samples were generated based on the thresholds for NDVI and CHM, and the generated sample patches were saved in TIFF format (Figure 8). It took almost five minutes to generate the sample patches for each class; the processing time depends on the number of sample patches, with more samples requiring more time. All four spectral bands (red, green, blue and near-infrared) were used when generating samples from the 2005 images, whereas three spectral bands (red, green and blue) were used for the 2015/16 images. A sketch of the patch extraction follows.
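This minimal sketch cuts even-sized 22 × 22 windows around candidate locations; the array shapes, the function name and the coordinates are hypothetical, not eCognition’s internal routine.

```python
# Cut 22 x 22 labelled sample patches around candidate tree pixels.
import numpy as np

PATCH = 22  # even patch size, as required for 2 x 2 max pooling

def extract_patches(image: np.ndarray, centres, half: int = PATCH // 2):
    """image: (rows, cols, bands); centres: iterable of (row, col) pixels."""
    out = []
    for r, c in centres:
        if half <= r <= image.shape[0] - half and half <= c <= image.shape[1] - half:
            out.append(image[r - half:r + half, c - half:c + half, :])
    return np.stack(out) if out else np.empty((0, PATCH, PATCH, image.shape[2]))

img_2005 = np.zeros((500, 500, 4), dtype=np.float32)  # four bands for 2005
print(extract_patches(img_2005, [(100, 120), (260, 300)]).shape)  # (2, 22, 22, 4)
```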

2.5.2. Create CNN Model

A simple CNN model was created with one hidden layer. The hidden layer is defined by the kernel size, the number of feature maps and max pooling. Because even-sized kernels generate hidden units located between pixels that must then be shifted to match pixel borders, odd-sized kernels (13 × 13) were used with 40 feature maps. Max pooling with a 2 × 2 filter and a stride of 2 in both the horizontal and vertical directions was applied to reduce the resolution of the feature maps. The hidden-layer kernel thus has dimensions 4 × 13 × 13 × 40: the first factor (4) is the number of image layers, the second and third factors (13 × 13) describe the number of units in the local neighbourhood from which connections are forwarded into the hidden layer, and the final factor (40) is the number of feature maps generated. The hidden layer of this network therefore contains 27,040 (4 × 13 × 13 × 40) trainable weights.
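As a sketch only, the architecture described above can be written in Keras as follows. eCognition’s CNN workflow is built on TensorFlow, but this code is an assumed reconstruction, not the software’s internal implementation.

```python
# One hidden convolutional layer: 40 feature maps, 13 x 13 kernel, no bias,
# so the layer holds exactly 4 * 13 * 13 * 40 = 27,040 trainable weights.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(22, 22, 4)),              # 4 image layers (2005)
    tf.keras.layers.Conv2D(40, kernel_size=13,
                           activation="relu", use_bias=False),
    tf.keras.layers.MaxPooling2D(pool_size=2, strides=2),  # 2 x 2 max pooling
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="softmax"),        # tree vs. other
])
model.summary()  # the convolutional layer reports 27,040 parameters
```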

2.5.3. Train CNN Model

The model was then trained on the labelled sample patches, with the model weights adjusted using backpropagation. The learning rate is an important parameter, as it defines the amount by which the weights are adjusted in each iteration of the stochastic gradient descent optimisation [69]. A learning rate of 0.0015 was assigned based on trial and error. Higher learning rates speed up training but may fail to reach the bottom of the optimal minimum, while smaller values slow down training and may become stuck in local minima, ending with weights far from the optimal settings [69]. The training steps and training samples were set to 5000 and 50, respectively. With the given labelled samples and weight parameters, the training process took almost 30 min.
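A training sketch consistent with the reported settings follows. The dataset arrays are placeholders, the choice of plain SGD as the optimiser is an assumption, and interpreting “training samples = 50” as the mini-batch size per step is also an assumption.

```python
# Train the single-hidden-layer network with SGD at the reported learning rate.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([            # same network as the previous sketch
    tf.keras.layers.Input(shape=(22, 22, 4)),
    tf.keras.layers.Conv2D(40, 13, activation="relu", use_bias=False),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

x = np.random.rand(200, 22, 22, 4).astype("float32")  # placeholder patches
y = np.random.randint(0, 2, size=(200,))              # 0 = other, 1 = tree

model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.0015),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=50, verbose=0)   # batch size = assumption
```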

2.5.4. Apply CNN Model

After applying the trained CNN model to the input images (four layers for the 2005 image and three layers for the 2015/16 images), heatmaps were produced for the tree class (Figure 9) using the “apply convolutional neural network” algorithm in eCognition. The heatmaps give the probability of a tree at each location on a scale of 0 to 1 (values close to 1 indicate a high likelihood of trees; values close to 0 indicate a low likelihood). To extract trees from the image, the heatmaps were smoothed using a 7 × 7 Gaussian filter with a 32-bit float output type. The local maxima of the smoothed tree heatmap were generated using a morphology (dilate) filter of 3 × 3 pixels.
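A sketch of this post-processing with scipy.ndimage follows; mapping eCognition’s 7 × 7 Gaussian filter onto a sigma value is an approximation (sigma = 1 with truncate = 3 yields a 7-pixel kernel support), and the heatmap here is random.

```python
# Smooth the tree-probability heatmap, then mark 3 x 3 dilation-based local
# maxima above 0.5 as candidate tree locations.
import numpy as np
from scipy import ndimage

heatmap = np.random.rand(200, 200).astype(np.float32)  # stand-in heatmap

smoothed = ndimage.gaussian_filter(heatmap, sigma=1.0, truncate=3.0)  # ~7 x 7
peaks = (smoothed == ndimage.grey_dilation(smoothed, size=(3, 3))) & (smoothed > 0.5)
print(int(peaks.sum()), "candidate tree locations")
```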

2.5.5. Object-Based Classification Refinement

The heatmaps were segmented using multiresolution segmentation with a scale factor of 10, shape 0.1 and compactness 0.5. Segments with tree probability values greater than 0.5 were classified into the refined tree class. To reduce classification noise due to the similar spectral properties of trees, grass and nontree objects, a CHM threshold of less than or equal to 2 m and an NDVI threshold of less than 0.1 were applied in the classification. The classified refined tree objects were further refined using the assign merge function, pixel-based object resizing and the remove object function. Tree segments with relational borders greater than or equal to 0 to neighbouring tree segments were merged. Growing and shrinking modes with surface tension values greater than or equal to 0.5 and box sizes in X, Y and Z of 5, 5 and 1, respectively, were then applied in the pixel-based object resizing algorithm to refine the shapes of the tree segments. To eliminate smaller segments that were not trees, a threshold on the number of pixels was used: tree segments with areas smaller than or equal to 200 pixels (equivalent to 4.5 square metres) were removed from the tree class. Some further manual editing was done to refine the tree class, which was then exported as an ESRI (Environmental Systems Research Institute) shapefile.
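The full merge/resize/remove ruleset is specific to eCognition, but the final size filter can be sketched with connected-component labelling, as below; the mask is synthetic and this is an illustration, not the software’s routine.

```python
# Drop connected tree segments of 200 pixels or fewer from a binary tree mask.
import numpy as np
from scipy import ndimage

tree_mask = np.zeros((300, 300), dtype=bool)
tree_mask[50:80, 50:80] = True   # 900-pixel segment (kept)
tree_mask[5:10, 5:10] = True     # 25-pixel segment (removed)

labels, n = ndimage.label(tree_mask)
sizes = ndimage.sum(tree_mask, labels, index=np.arange(1, n + 1))
kept = np.isin(labels, np.arange(1, n + 1)[sizes > 200])
print("pixels before/after:", int(tree_mask.sum()), int(kept.sum()))
```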

2.6. Accuracy Assessment

A manual digitisation of one randomly selected parcel from each of the 14 patches on the 2015/16 images was used as the ground truth in an accuracy assessment. It was easy to discriminate trees using shape, colour and shadow length. The accuracy of tree detection was assessed using true positive (TP), false positive (FP) and false negative (FN) counts at the pixel level [70], as presented in Equations (3) to (6). TP represents pixels that are correctly identified as trees and intersect exactly with the ground truth. FP represents pixels classified as tree objects by the CNN classification that are not trees according to the ground truth. FN corresponds to tree pixels that are not detected by the applied CNN classification method. Four statistical measures based on TP, FP and FN were used, as follows:
Precision (P) = TP / (TP + FP) (3)
Recall (R) = TP / (TP + FN) (4)
F1 measure (F1) = 2PR / (P + R) (5)
Intersection over Union (IOU) = TP / (TP + FP + FN) (6)
Precision (P) answers the question, “How many of the classified pixels are trees”? Recall (R) determines the proportion of the actual (ground truth) tree pixels that were classified as trees in the image. The balance between P and R was determined using the F1 measurement. The validation metric intersection over union (IOU) was used to measure the accuracy of the classification results based on the ground truth [71]. An IOU value of 100% represents the detected object exactly overlapping with the ground truth mapping, whereas an IOU value of 0% indicates no overlap.
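A minimal sketch of Equations (3) to (6) follows, computed from a classified mask and a digitised ground-truth mask; the tiny arrays are illustrative only.

```python
# Pixel-level precision, recall, F1 and IOU from TP, FP and FN counts.
import numpy as np

predicted = np.array([[1, 1, 0], [0, 1, 0], [1, 0, 0]], dtype=bool)
truth     = np.array([[1, 0, 0], [0, 1, 1], [1, 0, 0]], dtype=bool)

tp = np.sum(predicted & truth)     # classified tree, truly tree
fp = np.sum(predicted & ~truth)    # classified tree, not a tree
fn = np.sum(~predicted & truth)    # missed tree

p = tp / (tp + fp)
r = tp / (tp + fn)
f1 = 2 * p * r / (p + r)
iou = tp / (tp + fp + fn)
print(f"P={p:.2f} R={r:.2f} F1={f1:.2f} IOU={iou:.2f}")
```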

2.7. Statistical Analysis

Statistical analysis was carried out in Minitab 18 [72]. Regression analysis was performed at the patch level with five predictor variables (income, tertiary education, mean parcel size, mean years since last sale and mean number of times sold between 1983 and 2015) to model each of tree cover loss, gain and persistence. The mean parcel size in the sample-level analysis was the average area of the 10 random parcels; similarly, the mean years since sale and the mean number of times sold were averages over the 10 random parcels. The model with the highest adjusted R2 in which all predictor variables had significant (p < 0.05) slopes was selected.
A general linear model (GLM) was used to model each of tree loss, gain and persistence at the parcel level with four predictor variables: sample patch number, parcel size, years from sale and number of times sold. The sample patch number was used as a random variable in this analysis. The others were covariates. Due to the low number of sample patches, an adjusted R2 was used to indicate the level of explanation of alternative models. The model with the highest adjusted R2, and all predictor variables with significant (p < 0.05) slopes was selected.
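The analyses were run in Minitab; as a hedged sketch of the same patch-level multiple regression, statsmodels OLS reports the adjusted R-squared and slope p-values used for model selection. All data values below are placeholders, not the study data.

```python
# Ordinary least squares with adjusted R-squared, mirroring the patch-level model.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "income": rng.normal(1500, 300, 14),       # median household income
    "parcel_size": rng.normal(700, 150, 14),   # mean parcel size (m^2)
})
df["tree_loss"] = 0.002 * df["income"] + 0.01 * df["parcel_size"] + rng.normal(0, 1, 14)

X = sm.add_constant(df[["income", "parcel_size"]])
fit = sm.OLS(df["tree_loss"], X).fit()
print(round(fit.rsquared_adj, 3))
print(fit.pvalues)  # retain predictors with p < 0.05
```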

3. Results

3.1. Accuracy Assessment

The IOU values ranged from 62% to 88%. The F1 measure values ranged from 77% to 94%. The mean IOU value was found to be 70%. The mean precision and recall values were 87% and 85%, respectively.
An overall accuracy of 96% and a kappa coefficient of 0.77 were found for tree extraction from the 2005 data. For the 2015/16 data, the accuracy was higher: 98% overall accuracy and a 0.93 kappa coefficient (Figure 10).

3.2. Tree Cover Change

There was a net tree cover loss in all the sample patches. The highest tree cover losses were in the Kingston (18.4%), Blackmans Bay (14.1%) and Kingston Beach (12.9%) sample patches. The lowest tree cover losses were in Chigwell (3.9%), North Hobart (4.2%) and Goodwood (4.6%) (Figure 11).
There was a strong positive relationship between the net tree cover losses of 2005–2015 and tree cover in 2005, with a strong positive residual for the net loss for Kingston and a strong negative residual for North Hobart (Figure 12).
The best model for tree cover loss at the patch level had positive influences from income and mean parcel size (Table 4). At the parcel level, the parcel size was the only predictor of tree cover loss, with the larger the parcel, the greater the tree loss (Table 5). The best model for tree cover gain at the patch level had positive influences from tertiary education, mean parcel size and mean years since sale (Table 4). At the parcel level, a poorly explanatory model had positive influences from parcel size and years since sale (Table 5). Tree persistence was well-explained at the patch level by tertiary education, mean parcel size and mean years since sale, all with positive influences (Table 4). At the parcel level, only the influence of the parcel size remained (Table 5).

4. Discussion

4.1. Object-Based CNN Method for Urban Tree Cover Mapping

Mapping urban tree cover changes with high thematic accuracy in an automated way is a challenging task, and various attempts have been made in the past. Ellis and Mathews [73] used OBIA to quantify urban tree canopy changes between 2006 and 2013 in Oklahoma City using one-metre spatial resolution RGB aerial imagery and LiDAR data. Guo et al. [7] used very high-resolution RGB aerial images of 2011 (0.1 m) and 2015/16 (0.075 m) and a 2011 LiDAR dataset to map city-wide canopy cover changes in Christchurch, New Zealand using OBIA and a random forest classifier. However, both studies [7,73] acknowledged that their tree extraction results could have been better had aerial imagery with a near-infrared (NIR) band been available to fix misclassifications caused by spectral similarities between roof materials and trees. In the present study, we used CHM and NDVI values as thresholds to generate training samples of tree classes from the 2005 satellite imagery; these threshold values rely on the NIR band. However, due to the unavailability of a NIR band for the 2015/16 imagery, we generated tree training samples from the RGB bands using only the CHM threshold, with manual editing.
Branson et al. [74] used aerial and Google Street View images to extract urban trees, detect tree species and map tree species cover changes in a Californian city, USA using a state-of-the-art CNN method. In contrast to the method of [74], we used LiDAR data to extract urban trees from Google Earth images using object-based CNN. The LiDAR data provided an accurate extent and location for each tree by adding a third dimension on top of latitude and longitude.
In the present study, the CNN model was trained with automatically generated samples. The object-based CNN method, when trained with manually generated samples and applied to very high-resolution multispectral imagery, might produce better accuracy than the present research [56]. However, the manual preparation of training samples might not always be feasible in terms of time and cost.
A comparison with previous relevant studies using the OBIA and CNN methods for urban tree cover mapping reveals a novel combination of LiDAR, very high-resolution satellite imagery, aerial imagery and recent Google Earth imagery, with an overall accuracy above 95% based on the confusion matrix and 70% based on IOU.

4.2. Urban Tree Cover Change

The influence of years since house sale on tree cover gain and tree cover persistence (Figure 13 and Figure 14) is consistent with the hypothesis that tree change is associated with changes in garden/parcel ownership [61,62]. Gain would result from the growth of new trees planted soon after possession and of those allowed to survive. Persistence would reflect the stability of trees in long-possessed gardens. The lack of a negative effect of time since sale on tree cover loss over the decade may relate to a putatively short period in which trees are removed to satisfy preferences for other trees or fewer trees. If this period were a year and there was a ten percent house turnover per annum, the same tree loss would be expected in each of the ten years between 2005 and 2015, making it unlikely that time since sale would have a linear relationship with tree loss. In contrast, all gains would be incremental after the initial loss.
The net tree cover loss contrasts with the widespread tree density gain recorded for Hobart in an earlier period (1961–2006) [16] but is consistent with some other observations from Australia [75,76,77,78] and elsewhere [7,73,79,80,81]. Tree cover is likely to be predicted by tree density, except where very recent suburbs on previously treeless areas are contrasted with older suburbs or where houses that were built amongst pre-existing trees are contrasted with suburbs of the same age built in treeless areas. The highest losses of tree cover between 2005 and 2015 were in those areas where new developments of houses occurred amongst indigenous trees. The removal of older local indigenous trees tends to occur gradually, as they drop limbs. The older suburbs and those developed on farmlands did not exhibit high levels of net tree losses.
Variation of tree cover loss, gain and persistence with parcel size (Figure 13, Figure 14 and Figure 15) was expected, because the opportunity to lose trees is much greater where there are more trees in more space [16]. The positive effects of high proportions of householders with tertiary qualifications on tree gain and persistence (Figure 14 and Figure 15) are consistent with the influence of a tertiary education on garden complexity [15].
The significant relationship at the patch scale between the tree cover loss and median household income with the parcel size (Figure 13) held constant is superficially puzzling, given that the household income was the best predictor of the percentage frequency of trees in front gardens in Hobart suburbs out of many socioeconomic, environmental and demographic variables [15]. The positive correlation between household income and tree cover loss might be taken to indicate that people with higher household incomes can better afford tree removal from their properties than poorer people or that people with higher incomes are more likely to perform building extensions, landscaping, and other structural development activities that result in tree losses. However, the main reason is likely to be that income relates closely to absolute tree abundance, so equal proportionate losses will result in higher absolute losses in richer areas. Our loss figures are the absolute percentage of a block from which the tree cover has disappeared, not a percentage of the 2005 cover.

5. Limitations

The main limitation of this research is the time difference between the remote-sensing images used (2005 and 2015/16) and the LiDAR datasets (2008 and 2011). This could have introduced error into the analysis, because the analysis uses the CHM generated from the LiDAR data to identify tree cover. Trees cleared between the acquisition of the 2005 imagery and the 2008 LiDAR data may not have been classified as trees, while trees planted after the 2011 LiDAR acquisition that were taller than two metres at the time of the 2015/16 image acquisition might not be classified as trees. Additionally, the inconsistency in the spatial resolution of the input images, which came from different sources (QuickBird satellite images, Google Earth images and aerial images), might have introduced some errors.

6. Conclusions

Urban trees have economic, environmental and socioeconomic benefits, to the extent that their maintenance or increase is often an objective for governments. The development and implementation of policies requires accurate data on tree changes. The present research successfully maps tree cover changes and models the relationship of those changes with socioeconomic factors. This research makes three major contributions. First, automatically generated training samples were used to train the CNN model. Second, a combined CNN and OBIA method was applied to map urban trees and urban tree cover changes at the sample patch and cadastral parcel spatial analysis units. Third, the relationship between tree cover change and socioeconomic variables was modelled. A net tree cover loss was measured in the study area of Greater Hobart between 2005 and 2015/16. This finding may motivate local councils to make plans and policies to reverse this tendency, such as increasing tree planting on public lands.
This research uses a simple CNN model with a single hidden layer. In future research, multiple hidden layers with a change in parameters can be applied and tested. Similarly, deeper CNN methods, including region-based CNN (R-CNN) and fully connected CNN (F-CNN), can be further tested for urban tree coverage mapping and tree species identification.
Five socioeconomic predictor variables were used to model the tree cover changes using a regression analysis. Topographic and climatic variables, such as slope, elevation, aspect, solar radiation, geology and precipitation could be used as predictors in developing higher-order spatial-statistical methods that may help in further understanding spatial and temporal associations in tree cover change mapping.

Author Contributions

Conceptualisation, S.T., J.A. and J.B.K.; methodology, S.T., J.A. and J.B.K.; software, S.T.; validation, S.T.; formal analysis, S.T., J.A. and J.B.K.; investigation, S.T., J.A. and J.B.K.; data curation, S.T., J.A. and J.B.K.; writing—original draft preparation, S.T.; writing—review and editing, S.T., J.A. and J.B.K.; visualisation, S.T. and supervision, J.A. and J.B.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors wish to thank the Kingborough Council, Tasmania for providing the aerial imagery of the study area. We are grateful to the University of Tasmania for providing research facilities. We also thank Google Earth and Land Information System Tasmania (LIST) for providing imagery and LiDAR point cloud and cadastral parcel datasets.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1. From left to right, first column (1) represents the satellite images of 2005; second column (2) represents aerial images of 2015 (a2,b2,c2) and Google images of 2016 (d2 onwards) and the third column (3) represents the tree cover area changes between 2005 and 2015/16 in terms of tree cover area loss (red), gain (green) and persistence (blue).

References

  1. Bolund, P.; Hunhammar, S. Ecosystem services in urban areas. Ecol. Econ. 1999, 29, 293–301. [Google Scholar]
  2. Lohr, V.; Pearson-Mims, C.; Tarnai, J.; Dillman, D. How Urban Residents Rate and Rank the Benefits and Problems Associated with Trees in Cities. J. Arboric. 2004, 1, 28–35. [Google Scholar]
  3. Shackleton, S.; Chinyimba, A.; Hebinck, P.; Shackleton, C.; Kaoma, H. Multiple benefits and values of trees in urban landscapes in two towns in northern South Africa. Landsc. Urban Plan. 2015, 136, 76–86. [Google Scholar] [CrossRef]
  4. Solecki, W.D.; Welchb, J.M. Urban parks: Green spaces or green walls? Landsc. Urban Plan. 1995, 32, 93–106. [Google Scholar]
  5. Tyrväinen, L.; Silvennoinen, H.; Kolehmainen, O. Ecological and aesthetic values in urban forest management. Urban For. Urban Green. 2003, 1, 15–149. [Google Scholar] [CrossRef]
  6. Erker, T.; Wang, L.; Lorentz, L.; Stoltman, A.; Townsend, P.A. A statewide urban tree canopy mapping method. Remote Sens. Environ. 2019, 229, 148–158. [Google Scholar] [CrossRef]
  7. Guo, T.; Morgenroth, J.; Conway, T.; Xu, C. City-wide canopy cover decline due to residential property redevelopment in Christchurch, New Zealand. Sci. Total Environ. 2019, 681, 202–210. [Google Scholar] [CrossRef]
  8. Nowak, D.J.; Rowntree, R.A.; McPherson, E.G.; Sisinni, S.M.; Kerkmann, E.R.; Stevens, J.C. Measuring and analyzing urban tree cover. Landsc. Urban Plan. 1996, 36, 49–57. [Google Scholar]
  9. Schneider, A. Monitoring land cover change in urban and peri-urban areas using dense time stacks of Landsat satellite data and a data mining approach. Remote Sens. Environ. 2012, 124, 689–704. [Google Scholar] [CrossRef]
  10. Stave, J.; Oba, G.; Stenseth, N.C. Temporal changes in woody-plant use and the ekwar indigenous tree management system along the Turkwel River, Kenya. Environ. Conserv. 2001, 28, 150–159. [Google Scholar] [CrossRef]
  11. Tucker Lima, J.M.; Staudhammer, C.L.; Brandeis, T.J.; Escobedo, F.J.; Zipperer, W. Temporal dynamics of a subtropical urban forest in San Juan, Puerto Rico, 2001–2010. Landsc. Urban Plan. 2013, 120, 96–106. [Google Scholar] [CrossRef]
  12. Bowden, L.W. Urban environments: Inventory and analysis. Man. Remote Sens. 1975, 12, 1815–1880. [Google Scholar]
  13. Grove, J.M.; Troy, A.R.; O’Neil-Dunne, J.P.M.; Burch, W.R.; Cadenasso, M.L.; Pickett, S.T.A. Characterization of households and its implications for the vegetation of urban ecosystems. Ecosystems 2006, 9, 578–597. [Google Scholar] [CrossRef]
  14. Iverson, L.R.; Cook, E.A. Urban forest cover of the Chicago region and its relation to household density and income. Urban Ecosyst. 2000, 4, 105–124. [Google Scholar] [CrossRef]
  15. Kirkpatrick, J.B.; Daniels, G.D.; Zagorski, T. Explaining variation in front gardens between suburbs of Hobart, Tasmania, Australia. Landsc. Urban Plan. 2007, 79, 314–322. [Google Scholar] [CrossRef]
  16. Kirkpatrick, J.B.; Daniels, G.D.; Davison, A. Temporal and spatial variation in garden and street trees in six eastern Australian cities. Landsc. Urban Plan. 2011, 101, 244–252. [Google Scholar] [CrossRef]
  17. Martin, C.A.; Paige, S.W.; Kinzig, A.P. Neighbourhood socioeconomic status is a useful predictor of perennial landscape vegetation in residential neighbourhoods and embedded small parks of Phoenix, AZ. Landsc. Urban Plan. 2004, 69, 355–368. [Google Scholar] [CrossRef]
  18. Talarchek, G.M. The Urban forest of New Orleans: An exploratory analysis of relationship. Urban Geogr. 1990, 11, 65–86. [Google Scholar] [CrossRef]
  19. Moskal, L.M.; Styers, D.M.; Halabisky, M. Monitoring urban tree cover using object-based image analysis and public domain remotely sensed data. Remote Sens. 2011, 3, 2243–2262. [Google Scholar] [CrossRef] [Green Version]
  20. Ehlers, M.; Gähler, M.; Janowsky, R. Automated analysis of ultra high resolution remote sensing data for biotope type mapping: New possibilities and challenges. ISPRS J. Photogramm. Remote Sens. 2003, 57, 315–326. [Google Scholar] [CrossRef]
  21. Mikita, T.; Janata, P.; Surovỳ, P. Forest stand inventory based on combined aerial and terrestrial close-range photogrammetry. Forests 2016, 7, 165. [Google Scholar] [CrossRef] [Green Version]
  22. Ke, Y.; Quackenbush, L.J. A review of methods for automatic individual tree-crown detection and delineation from passive remote sensing. Internatl. J. Remote Sens. 2011, 32, 4725–4747. [Google Scholar] [CrossRef]
  23. Xiao, Q.; McPherson, E.G. Tree health mapping with multispectral remote sensing data at UC Davis, California. Urban Ecosyst. 2005, 8, 349–361. [Google Scholar] [CrossRef]
  24. Anees, A.; Aryal, J. A Statistical Framework for Near-Real Time Detection of Beetle Infestation in Pine Forests Using MODIS Data. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1717–1721. [Google Scholar] [CrossRef]
  25. Anees, A.; Aryal, J. Near-real time detection of beetle infestation in pine forests using MODIS data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 3713–3723. [Google Scholar] [CrossRef]
  26. Anees, A.; Aryal, J.; O’Reilly, M.M.; Gale, T.J. A Relative Density Ratio-Based Framework for Detection of Land Cover Changes in MODIS NDVI Time Series. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 3359–3371. [Google Scholar] [CrossRef]
  27. Rogan, J.; Chen, D.M. Remote sensing technology for mapping and monitoring land-cover and land-use change. Prog. Plann. 2004, 61, 301–325. [Google Scholar] [CrossRef]
  28. Ardila, J.P.; Bijker, W.; Tolpekin, V.A.; Stein, A. Context-sensitive extraction of tree crown objects in urban areas using VHR satellite images. Int. J. Appl. Earth Obs. Geoinf. 2012, 15, 57–69. [Google Scholar] [CrossRef] [Green Version]
  29. O’Neil-Dunne, J.; MacFaden, S.; Royar, A. A versatile, production-oriented approach to high-resolution tree-canopy mapping in urban and suburban landscapes using GEOBIA and data fusion. Remote Sens. 2014, 6, 12837–12865. [Google Scholar] [CrossRef] [Green Version]
  30. Walker, J.S.; Briggs, J.M. An Object-oriented Approach to Urban Forest Mapping in Phoenix. Photogramm. Eng. Remote Sens. 2007, 73, 577–583. [Google Scholar]
  31. Zhou, W.; Troy, A.; Grove, M. Object-based Land Cover Classification and Change Analysis in the Baltimore Metropolitan Area Using Multitemporal High Resolution Remote Sensing Data. Sensors 2008, 8, 1613–1636. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  32. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [Google Scholar] [CrossRef] [Green Version]
  33. Walker, J.S.; Blaschke, T. Object-based land-cover classification for the Phoenix metropolitan area: Optimization vs. transportability. Int. J. Remote Sens. 2008, 29, 2021–2040. [Google Scholar] [CrossRef]
  34. Zhou, J.; Yu, B.; Qin, J. Multi-level spatial analysis for change detection of urban vegetation at individual tree scale. Remote Sens. 2014, 6, 9086–9103. [Google Scholar] [CrossRef] [Green Version]
  35. Banzhaf, E.; Kollai, H. Monitoring the urban tree cover for urban ecosystem services—The case of Leipzig, Germany. In Proceedings of the 36th International Symposium on Remote Sensing of Environment, The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Berlin, Germany, 11–15 May 2015; Volume 40, pp. 301–305. [Google Scholar] [CrossRef] [Green Version]
  36. Ejares, J.A.; Violanda, R.R.; Diola, A.G.; Dy, D.T.; Otadoy, J.B.; Otadoy, R.E.S. Tree canopy cover mapping using LiDAR in urban barangays of Cebu City, central Philippines. In Proceedings of the XXIII ISPRS Congress, The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Prague, Czech Republic, 12–19 July 2016; Volume 41, pp. 611–615. [Google Scholar] [CrossRef]
  37. Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.; Feitosa, R.Q.; Van der Meer, F.; Van der Werff, H.; Van Coillie, F.; et al. Geographic Object-Based Image Analysis—Towards a new paradigm. ISPRS J. Photogramm. Remote Sens. 2014, 87, 180–191. [Google Scholar] [CrossRef] [Green Version]
  38. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  39. Drǎguţ, L.; Tiede, D.; Levick, S.R. ESP: A tool to estimate scale parameter for multiresolution image segmentation of remotely sensed data. Int. J. Geogr. Inf. Sci. 2010, 24, 859–871. [Google Scholar] [CrossRef]
  40. Jin, B.; Ye, P.; Zhang, X.; Song, W.; Li, S. Object-Oriented Method Combined with Deep Convolutional Neural Networks for Land-Use-Type Classification of Remote Sensing Images. J. Indian Soc. Remote Sens. 2019, 47, 951–965. [Google Scholar] [CrossRef] [Green Version]
  41. Ming, D.; Li, J.; Wang, J.; Zhang, M. Scale parameter selection by spatial statistics for GeOBIA: Using mean-shift based multi-scale segmentation as an example. ISPRS J. Photogramm. Remote Sens. 2015, 106, 28–41. [Google Scholar] [CrossRef]
  42. Du, S.; Shy, M.; Wang, Q. Modelling relational contexts in GEOBIA framework for improving urban land-cover mapping. GISci. Remote Sens. 2019, 56, 184–209. [Google Scholar] [CrossRef]
  43. Belgiu, M.; Tomljenovic, I.; Lampoltshammer, T.J.; Blaschke, T.; Höfle, B. Ontology-based classification of building types detected from airborne laser scanning data. Remote Sens. 2014, 6, 1347–1366. [Google Scholar] [CrossRef] [Green Version]
  44. Duro, D.C.; Franklin, S.E.; Dubé, M.G. A comparison of pixel-based and object-based image analysis with selected machine learning algorithms for the classification of agricultural landscapes using SPOT-5 HRG imagery. Remote Sens. Environ. 2012, 118, 259–272. [Google Scholar] [CrossRef]
  45. Heumann, B.W. An object-based classification of mangroves using a hybrid decision tree-support vector machine approach. Remote Sens. 2011, 3, 2440–2460. [Google Scholar] [CrossRef] [Green Version]
  46. Fukushima, K. Neocognitron: A hierarchical neural network capable of visual pattern recognition. Neural Netw. 1988, 1, 119–130. [Google Scholar] [CrossRef]
  47. Fu, T.; Ma, L.; Li, M.; Johnson, B.A. Using convolutional neural network to identify irregular segmentation objects from very high-resolution remote sensing imagery. J. Appl. Remote Sens. 2018, 12, 025010. [Google Scholar] [CrossRef]
  48. Zhang, Q.; Wang, Y.; Liu, Q.; Liu, X.; Wang, W. CNN based suburban building detection using monocular high resolution Google Earth images. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 661–664. [Google Scholar] [CrossRef]
  49. Zhu, X.X.; Tuia, D.; Mou, L.; Xia, G.S.; Zhang, L.; Xu, F.; Fraundorfer, F. Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–36. [Google Scholar] [CrossRef] [Green Version]
  50. Alom, M.Z.; Taha, T.M.; Yakopcic, C.; Westberg, S.; Sidike, P.; Nasrin, M.S.; Van Esesn, B.C.; Awwal, A.A.S.; Asari, V.K. The History Began from AlexNet: A Comprehensive Survey on Deep Learning Approaches. arXiv 2018, arXiv:1803.01164. [Google Scholar]
  51. Zhou, W.; Newsam, S.; Li, C.; Shao, Z. Learning low dimensional convolutional neural networks for high-resolution remote sensing image retrieval. Remote Sens. 2017, 9, 489. [Google Scholar] [CrossRef] [Green Version]
  52. Chen, S.W.; Shivakumar, S.S.; Dcunha, S.; Das, J.; Okon, E.; Qu, C.; Taylor, C.J.; Kumar, V. Counting Apples and Oranges with Deep Learning: A Data-Driven Approach. IEEE Robot. Autom. Lett. 2017, 2, 781–788. [Google Scholar] [CrossRef]
  53. Csillik, O.; Cherbini, J.; Johnson, R.; Lyons, A.; Kelly, M. Identification of Citrus Trees from Unmanned Aerial Vehicle Imagery Using Convolutional Neural Networks. Drones 2018, 2, 39. [Google Scholar] [CrossRef] [Green Version]
  54. Sa, I.; Ge, Z.; Dayoub, F.; Upcroft, B.; Perez, T.; McCool, C. Deepfruits: A fruit detection system using deep neural networks. Sensors 2016, 16, 1222. [Google Scholar] [CrossRef] [Green Version]
  55. Li, W.; Dong, R.; Fu, H.; Yu, L. Large-scale oil palm tree detection from high-resolution satellite images using two-stage convolutional neural networks. Remote Sens. 2019, 11, 11. [Google Scholar] [CrossRef] [Green Version]
  56. Wang, Z.; Underwood, J.; Walsh, K.B. Machine vision assessment of mango orchard flowering. Comput. Electron. Agric. 2018, 151, 501–511. [Google Scholar] [CrossRef]
  57. Timilsina, S.; Sharma, S.K.; Aryal, J. Mapping Urban Trees Within Cadastral Parcels Using an Object-based Convolutional Neural Network. In Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences; IV-5/W2; Copernicus Publications: Göttingen, Germany, 2019; pp. 111–117. [Google Scholar] [CrossRef] [Green Version]
  58. Fan, C.; Johnston, M.; Darling, L.; Scott, L.; Liao, F.H. Land use and socio-economic determinants of urban forest structure and diversity. Landsc. Urban. Plan. 2019, 181, 10–21. [Google Scholar] [CrossRef]
  59. Steenberg, J.W.N.; Robinson, P.J.; Duinker, P.N. A spatio-temporal analysis of the relationship between housing renovation, socioeconomic status, and urban forest ecosystems. Environ. Plan. B Urban. Anal. City Sci. 2018, 46, 1115–1131. [Google Scholar] [CrossRef]
  60. Grove, J.M.; Burch, W.R.J. A social ecosystem approach and applications of urban ecosystem and landscape analyses: A case study of Baltimore, Maryland. Urban Ecosyst. 1997, 1, 259–275. [Google Scholar]
  61. Kirkpatrick, J.B.; Davison, A.; Daniels, G.D. Resident attitudes towards trees influence the planting and removal of different types of trees in eastern Australian cities. Landsc. Urban Plan. 2012, 107, 147–158. [Google Scholar] [CrossRef]
  62. Kirkpatrick, J.B.; Davison, A.; Daniels, G.D. Sinners, scapegoats or fashion victims? Understanding the deaths of trees in the green city. Geoforum 2013, 48, 165–176. [Google Scholar] [CrossRef]
  63. TheLIST, Land Information System Tasmania Data. 2019. Available online: https://listdata.thelist.tas.gov.au/opendata/ (accessed on 10 September 2019).
  64. Australian Bureau of Statistics, Australian Bureau of Statistics Belconnen, ACT. 2019. Available online: https://www.abs.gov.au/ (accessed on 5 October 2019).
  65. Bolstad, P. GIS Fundamentals: A First Text on Geographic Information Systems, 4th ed.; Eider Press: White Bear Lake, MN, USA, 2012. [Google Scholar]
  66. Yang, C. A high-resolution airborne four-camera imaging system for agricultural remote sensing. Comput. Electron. Agric. 2012, 88, 13–24. [Google Scholar] [CrossRef]
  67. Bannari, A.; Morin, D.; Bonn, F. A Review of Vegetation Indices. Remote Sens. Rev. 1995, 13, 95–120. [Google Scholar] [CrossRef]
  68. Dubayah, R.O.; Drake, J.B. Lidar Remote Sensing for Forestry. J. For. 2000, 98, 44–46. [Google Scholar]
  69. Trimble eCognition Software, Tutorial 7—Convolutional Neural Networks in eCognition. 2019. Available online: https://docs.ecognition.com/v9.5.0/Resources/Images/Tutorial 7-Convolutional Neural Networks in eCognition.pdf (accessed on 10 September 2020).
  70. Ghorbanzadeh, O.; Blaschke, T.; Gholamnia, K.; Meena, S.R.; Tiede, D.; Aryal, J. Evaluation of Different Machine Learning Methods and Deep-Learning Convolutional Neural Networks for Landslide Detection. Remote Sens. 2019, 11, 196. [Google Scholar] [CrossRef] [Green Version]
  71. Chen, L.-C.; Barron, J.T.; Papandreou, G.; Murphy, K.; Yuille, A.L. Semantic Image Segmentation with Task-Specific Edge Detection Using CNNs and a Discriminatively Trained Domain Transform. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4545–4554. [Google Scholar]
  72. Minitab Inc. User’s Guide: Data Analysis and Quality Tools; Release 12. Minitab: State College, PA, USA, 1998. [Google Scholar]
  73. Ellis, E.A.; Mathews, A.J. Object-based delineation of urban tree canopy: Assessing change in Oklahoma City, 2006–2013. Comput. Environ. Urban. Syst. 2019, 73, 85–94. [Google Scholar] [CrossRef]
  74. Branson, S.; Wegner, J.D.; Hall, D.; Lang, N.; Schindler, K.; Perona, P. From Google Maps to a fine-grained catalog of street trees. ISPRS J. Photogramm. Remote Sens. 2018, 135, 13–30. [Google Scholar] [CrossRef] [Green Version]
  75. Ballantyne, M.; Pickering, C.M. Differences in the impacts of formal and informal recreational trails on urban forest loss and tree structure. J. Environ. Manag. 2015, 159, 94–105. [Google Scholar] [CrossRef] [Green Version]
  76. Brunner, J.; Cozens, P. Where Have All the Trees Gone? Urban Consolidation and the Demise of Urban Vegetation: A Case Study from Western Australia. Plan. Pract. Res. 2013, 28, 231–255. [Google Scholar] [CrossRef]
  77. Kaspar, J.; Kendal, D.; Sore, R.; Livesley, S.J. Urban Forestry & Urban Greening Random point sampling to detect gain and loss in tree canopy cover in response to urban densification. Urban For. Urban Green. 2017, 24, 26–34. [Google Scholar] [CrossRef]
  78. Lin, B.; Meyers, J.; Barnett, G. Understanding the potential loss and inequities of green space distribution with urban densification. Urban For. Urban Green. 2015, 14, 952–958. [Google Scholar] [CrossRef]
  79. Ossola, A.; Hopton, M.E. Measuring urban tree loss dynamics across residential landscapes. Sci. Total Environ. 2018, 612, 940–949. [Google Scholar] [CrossRef]
  80. Pauleit, S.; Ennos, R.; Golding, Y. Modeling the environmental impacts of urban land use and land cover change—A study in Merseyside, UK. Landscap. Urban Plan. 2005, 71, 295–310. [Google Scholar] [CrossRef]
  81. Potapov, P.V.; Turubanova, S.A.; Hansen, M.C.; Adusei, B.; Broich, M.; Altstatt, A.; Mane, L.; Justice, C.O. Quantifying forest cover loss in Democratic Republic of the Congo, 2000–2010, with Landsat ETM+ data. Remote Sens. Environ. 2012, 122, 106–116. [Google Scholar] [CrossRef]
Figure 1. Hobart inner and general residential zoning with fourteen random sample points and patches of four hectares (seven sample points/patches in Glenorchy Municipality, four in Hobart Municipality and three in Kingborough Municipality).
Figure 2. Raster plot (50 × 50 m) of a sample patch image of the Claremont suburb of Glenorchy, Tasmania: (a) 2005 satellite image with four bands (red, green, blue and near-infrared) and (b) 2016 Google Earth image with three bands (red, green and blue).
Figure 3. Sample patch and parcel selection methodology applied in this research; different boxes represent the sequential steps followed in executing the research.
Figure 4. Spectral profile with pixel values on the X-axis and frequency on the Y-axis: (a) 2005 satellite image with four bands (red, green, blue and near-infrared) and (b) 2016 Google Earth image with three bands (red, green and blue).
Figure 5. Canopy height model (CHM) generation from the digital surface model (DSM) and digital elevation model (DEM).
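As Figure 5 indicates, the CHM is the per-pixel difference between the DSM and the DEM. The processing in this study was carried out in the eCognition workflow shown later in Figure 7; for readers reproducing the CHM step outside that software, a minimal Python sketch follows, assuming hypothetical GeoTIFF inputs (dsm.tif, dem.tif) that are already co-registered on the same grid.

```python
import numpy as np
import rasterio

# Hypothetical input paths; both rasters are assumed to share
# the same grid, extent and coordinate reference system.
with rasterio.open("dsm.tif") as dsm_src, rasterio.open("dem.tif") as dem_src:
    dsm = dsm_src.read(1).astype("float32")
    dem = dem_src.read(1).astype("float32")
    profile = dsm_src.profile

# CHM = DSM - DEM; clamp small negative differences (noise) to zero.
chm = np.clip(dsm - dem, 0, None)

profile.update(dtype="float32", count=1)
with rasterio.open("chm.tif", "w", **profile) as dst:
    dst.write(chm, 1)
```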
Figure 6. Training sample of tree class (parrot green) within a sample area of the 2005 image using the normalised difference vegetation index (NDVI) and canopy height model (CHM) thresholds.
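As a rough illustration of how a training mask of this kind can be derived, the sketch below computes NDVI = (NIR − Red)/(NIR + Red) from a four-band image and intersects an NDVI threshold with a CHM threshold. The cut-off values used here (NDVI > 0.2, CHM > 2 m) and the file names are placeholders, not the thresholds adopted in this study.

```python
import numpy as np
import rasterio

# Hypothetical inputs: a four-band image (red, green, blue, NIR)
# and the CHM from the previous step, on the same grid.
with rasterio.open("image_2005.tif") as src:
    red = src.read(1).astype("float32")
    nir = src.read(4).astype("float32")
with rasterio.open("chm.tif") as src:
    chm = src.read(1).astype("float32")

# NDVI = (NIR - Red) / (NIR + Red); guard against division by zero.
denom = nir + red
ndvi = np.divide(nir - red, denom, out=np.zeros_like(denom), where=denom > 0)

# Placeholder thresholds: tall (CHM > 2 m) and green (NDVI > 0.2)
# pixels are labelled as candidate tree training samples.
tree_mask = (ndvi > 0.2) & (chm > 2.0)
```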
Figure 7. Flowchart showing a convolutional neural network (CNN) and classification refinement workflow in eCognition software.
Figure 8. Examples of 22 × 22 pixel samples generated for the CNN. Images in the first row are example samples of the tree class; images in the second row are example samples of the other (nontree) class.
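Purely to illustrate the sample format in Figure 8, the sketch below cuts 22 × 22 pixel windows around labelled points. The image and coordinates are hypothetical; the actual samples in this study were generated within eCognition.

```python
import numpy as np

def extract_patch(image: np.ndarray, row: int, col: int, size: int = 22) -> np.ndarray:
    """Cut a size x size window centred on (row, col) from a (H, W, bands) array."""
    half = size // 2
    return image[row - half:row - half + size, col - half:col - half + size, :]

# Hypothetical example: a random 3-band image and two labelled sample points.
image = np.random.rand(500, 500, 3).astype("float32")
tree_patch = extract_patch(image, row=120, col=240)    # labelled "tree"
other_patch = extract_patch(image, row=300, col=80)    # labelled "other"
assert tree_patch.shape == (22, 22, 3)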
Figure 9. From left to right: (a) sample of the original 2015 aerial image, (b) segmented image, (c) segmented image with tree training samples (red polygons) and (d) tree heatmap (values close to 1 (red) indicate a high likelihood of trees; values close to 0 (blue) indicate a low likelihood of trees).
Figure 10. Accuracy assessment of tree extraction using the object-based convolutional neural network (OB-CNN) for 2005 and 2015/16.
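Both statistics in Figure 10 follow directly from a two-class confusion matrix: overall accuracy is the proportion of correctly classified pixels, and the kappa coefficient discounts the agreement expected by chance. The sketch below shows the computation on a hypothetical matrix, not the matrices from this study.

```python
import numpy as np

# Hypothetical 2-class confusion matrix: rows = reference, cols = predicted
# (classes: tree, other).
cm = np.array([[180, 12],
               [8, 800]], dtype=float)

n = cm.sum()
overall_accuracy = np.trace(cm) / n

# Kappa = (p_o - p_e) / (1 - p_e), where p_e is the chance agreement
# expected from the row and column marginals.
p_o = overall_accuracy
p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
kappa = (p_o - p_e) / (1 - p_e)

print(f"OA = {overall_accuracy:.3f}, kappa = {kappa:.3f}")  # OA = 0.980, kappa = 0.935
```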
Figure 11. Percentages of tree cover area loss, gain and persistence within the fourteen sample patches of four hectares each.
Figure 12. Percentage tree cover in 2005 versus net tree cover loss between 2005 and 2015/16 for the fourteen sample patches of four hectares each (trendline equation: net tree cover loss (2005–2015/16) = 0.3318 × tree cover (2005) + 0.138; R² = 0.55).
Figure 13. Sample-level tree cover loss versus median household income ($AUD) and mean parcel area (m²).
Figure 14. Sample-level tree cover gain versus mean parcel area (m²), mean number of years between the most recent sale (2000–2015) and 2015, and tertiary education (%).
Figure 15. Sample-level tree cover persistence versus mean parcel area (m²), mean number of years between the most recent sale (2000–2015) and 2015, and tertiary education (%).
Table 1. Sample patch area representation in the inner and general residential zone suburbs.

Sample Number | Suburb Name | Inner and General Residential Zone Area (ha) | Sample Patch Area Representation (%)
1 | Claremont | 433.93 | 0.92
2 | Chigwell | 81.36 | 4.92
3 | Berriedale | 143.74 | 2.78
4 | Montrose | 83.92 | 4.77
5 | Goodwood | 34.61 | 11.56
6 | Glenorchy | 416.49 | 0.96
7 | Moonah | 163.20 | 2.45
8 | New Town | 209.63 | 1.91
9 | North Hobart | 61.47 | 6.51
10 | West Hobart | 188.26 | 2.12
11 | Sandy Bay | 372.22 | 1.07
12 | Kingston | 463.71 | 0.86
13 | Kingston Beach | 75.92 | 5.27
14 | Blackmans Bay | 274.97 | 1.45
Table 2. LiDAR dataset specification (The LIST, 2019).

Description | Climate Future Mission | Mt. Wellington Mission
Acquisition start date | 4 March 2008 | 20 January 2011
Acquisition end date | 9 March 2008 | 28 January 2011
Device name | Optech Orion | Optech “ALTM Gemini”
Laser returns | 1st, 2nd, 3rd and last | 1st, 2nd, 3rd and last
Average point density (points/m²) | 1.5 | 1
Flying height | 800 m | 1400 m
Swath width | 700 m | 1040 m
Side overlap | 30% | 40%
Horizontal spatial accuracy | 0.25 m | 0.30 m
Vertical spatial accuracy | 0.25 m | 0.15 m
Horizontal datum | GDA94 | GDA94
Vertical datum | AHD | AHD
Table 3. Number of objects and maximum and minimum canopy height model (CHM) and normalised difference vegetation index (NDVI) values from iterative segmentation with different scale factors. The maximum and minimum values of CHM and NDVI did not change beyond scale factor 2.

Scale Factor | No. of Objects | CHM Min (m) | CHM Max (m) | NDVI Min | NDVI Max
50 | 615 | 0 | 9.71 | −0.059 | 0.912
40 | 1051 | 0 | 11.85 | −0.078 | 0.953
30 | 2593 | 0 | 13.73 | −0.091 | 0.984
20 | 3180 | 0 | 16.07 | −0.151 | 1
10 | 10,777 | 0 | 28.45 | −0.406 | 1
5 | 41,742 | 0 | 28.45 | −0.312 | 1
2 | 200,907 | 0 | 31.56 | −0.466 | 1
1 | 299,065 | 0 | 31.56 | −0.466 | 1
0.5 | 325,070 | 0 | 31.56 | −0.466 | 1
0.25 | 345,384 | 0 | 31.56 | −0.466 | 1
0.1 | 345,384 | 0 | 32.56 | −0.466 | 1
Table 4. Best-fit multiple regression models to predict tree cover loss, gain and persistence at the patch level.

Model | Response | Predictor | p-Value | R² | Equation
SL1 | Tree cover loss | Median household weekly income (2016); mean parcel size (2015) | 0.013; 0.006 | 58.5% | Loss = −17.27 + 0.01099 Income + 0.01972 Parcel size
SL2 | Tree cover gain | Tertiary education (2016); mean parcel size (2015); mean no. of years between the most recent sale (1983–2015) and 2015 | 0.004; 0.025; 0.012 | 67.8% | Gain = −8.53 + 0.1505 Education + 0.00765 Parcel size + 0.2679 Years since sale
SL3 | Tree cover persistence | Tertiary education (2016); mean parcel size (2015); mean no. of years between the most recent sale (1983–2015) and 2015 | 0.001; 0.001; 0.020 | 75.3% | Persistence = −30.91 + 0.4121 Education + 0.03162 Parcel size + 0.555 Years since sale
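The equations in Table 4 can be applied directly to patch-level predictor values. As a worked illustration with hypothetical inputs (a median household weekly income of AUD 1200 and a mean parcel size of 700 m², not values from the study), the SL1 loss model evaluates as in the sketch below.

```python
def tree_cover_loss_sl1(income_weekly: float, parcel_size_m2: float) -> float:
    """SL1 model from Table 4: Loss = -17.27 + 0.01099*Income + 0.01972*Parcel size."""
    return -17.27 + 0.01099 * income_weekly + 0.01972 * parcel_size_m2

# Hypothetical patch: income AUD 1200/week, mean parcel 700 m^2.
loss = tree_cover_loss_sl1(1200, 700)
print(f"Predicted tree cover loss: {loss:.1f}%")  # -17.27 + 13.19 + 13.80 = 9.7%
```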
Table 5. Best-fit multiple regression models to predict tree cover loss, gain and persistence at the parcel level.

Model | Response | Predictor | p-Value | R² | Equation
PL1 | Tree cover loss | Parcel size (2015) | <0.001 | 47.6% | Loss = −26.8 + 0.1973 Parcel size
PL2 | Tree cover gain | Parcel size (2015); no. of years between the most recent sale (1983–2015) and 2015 | 0.002; 0.044 | 9.9% | Gain = −0.37 + 0.02884 Parcel size + 0.606 Years since sale
PL3 | Tree cover persistence | Parcel size (2015) | <0.001 | 44.7% | Persistence = −140.6 + 0.3233 Parcel size
