Review

Computer Vision Technology for Monitoring of Indoor and Outdoor Environments and HVAC Equipment: A Review

by Bin Yang 1, Shuang Yang 1, Xin Zhu 1, Min Qi 1, He Li 1, Zhihan Lv 2,*, Xiaogang Cheng 3 and Faming Wang 4
1 School of Energy and Safety Engineering, Tianjin Chengjian University, Tianjin 300384, China
2 Department of Game Design, Faculty of Arts, Uppsala University, SE-62167 Uppsala, Sweden
3 College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210042, China
4 Department of Biosystems, KU Leuven, 3001 Leuven, Belgium
* Author to whom correspondence should be addressed.
Sensors 2023, 23(13), 6186; https://doi.org/10.3390/s23136186
Submission received: 8 June 2023 / Revised: 1 July 2023 / Accepted: 4 July 2023 / Published: 6 July 2023
(This article belongs to the Special Issue Multi-Modal Data Sensing and Processing)

Abstract: Artificial intelligence technologies such as computer vision (CV), machine learning, the Internet of Things (IoT), and robotics have advanced rapidly in recent years. These technologies enable non-contact measurements in three areas: indoor environmental monitoring, outdoor environmental monitoring, and equipment monitoring. This paper summarizes the specific applications of non-contact measurement based on infrared and visible images in the areas of personnel skin temperature, position and posture, the urban physical environment, building construction safety, and equipment operation status. The challenges and opportunities associated with the application of CV technology are also anticipated.

1. Introduction

1.1. Research Background

A comfortable indoor and outdoor environment and efficiently operating HVAC equipment are essential for human health, productivity growth, and energy savings. Traditional indoor and outdoor environment monitoring, as well as HVAC equipment condition monitoring, are limited in some way by the technical constraints of the time and space. With the technological advancements in artificial intelligence, these three fields are bound to break free from their original constraints and open up new avenues for development [1]. The research background of the aforementioned three fields is described below.

1.1.1. Indoor Environment Monitoring

A comfortable and healthy indoor environment is critical for occupant health and productivity. With the rise of the concept of “human-centered” buildings in recent years, indoor environmental control is expected to start from people’s actual needs, including the real-time measurement of people’s thermal status, and then control the heating, ventilation, and air conditioning (HVAC) system to meet the demands of thermal comfort and energy savings [2].
Fanger established the classical thermal comfort theory [3] in the 1970s, and researchers have since explored numerous methods for measuring human thermal comfort. Traditional measurement methods fall into three categories: questionnaires, environmental measurements, and physiological measurements. The questionnaire survey method relies heavily on personnel cooperation and is difficult to operate in practice. The environmental measurement method uses environmental sensors to detect parameters such as room temperature, humidity, and air flow rate to determine indoor environmental conditions; although more practical, its sensing devices are fixed in place and cannot track personnel positions in real time to meet individual thermal comfort needs. As a result, physiological measurement methods are being researched further.
By measuring various physiological signal parameters and associated body responses with sensing devices, physiological measurements are used to determine the body’s thermal sensations. It has been discovered that heart rate, pulse, blood perfusion, skin temperature, body metabolic rate, electroencephalogram (EEG), surface electromyography (sEMG), and other physiological signals can all be used to calculate human thermal comfort [4,5,6,7]. Thermal comfort can also be predicted by changes in body posture and gestures [8]. Physiological measurement methods are classified into three types based on whether they come into contact with the human body: contact measurements, semi-contact measurements, and non-contact measurements. Personnel must wear instrumentation or wearable sensing devices for the first two types of methods. The device installation position and angle, and the personnel’s foreign body sensations, can all cause experimental errors, making it difficult to use in practice.

1.1.2. Outdoor Environment Monitoring

The problem of urban overheating has been exacerbated by rapid global urbanization and global warming [9]. The urban heat island effect and heatwave disasters have deteriorated urban habitats, significantly reduced the thermal comfort of urban residents, and seriously threatened the population’s physical and mental health, as well as economic development. It is critical to create a pleasant outdoor environment, improve urban livability, and boost urban vitality [10,11].
Outdoor thermal comfort is the result of a complex interaction between the urban microclimate (air temperature, humidity, wind speed, solar radiation, and so on) and individual physiological (age, gender, physiological activity, and so on) and psychological factors. Multiple aspects of the natural and artificial urban environment influence pedestrian comfort under the stimulation of multiple senses, such as thermal sensations, vision, hearing, air quality [12,13], and the activities of others [14,15,16]. In the age of big data, the proliferation of navigation and positioning devices, mobile devices, and mapping services has resulted in new types of image geodata. Images depict the urban physical space from various perspectives, thereby assisting quantitative studies of the urban environment.

1.1.3. HVAC Equipment Monitoring

Fault detection and diagnosis (FDD) technology first appeared in HVAC and building engineering in the late 1980s. The majority of the research focuses on the core equipment and piping of the refrigeration plant room. The core equipment includes chillers, chilled water pumps, cooling water pumps, water collectors, and heat pumps; the piping includes chilled water circuits, cooling water circuits, etc. Typically, FDD technology detects and diagnoses common faults by measuring the temperature or pressure and the thermodynamic relationships at various locations in the system, which can effectively extend the lifetimes of equipment and components, stabilize room temperatures, and improve the building’s energy efficiency. Traditional FDD methods in HVAC fall into three broad categories: quantitative model-based methods, qualitative model-based methods, and process history-based methods. Quantitative models are developed from well-understood physical or engineering principles, but their calculations are more complicated; qualitative models are easy to develop and apply, but they rely on the expertise of developers, and certain rules may not apply when the system is complex. Process history models are black-box models built when the physical characteristics of the system are not well understood, and their development process is straightforward to implement.
In the traditional sense, HVAC system operational data are structured data from building automation systems (BASs). The use of contact measurement methods such as temperature and humidity sensors to obtain operating data such as the temperature and humidity of equipment or ducts is simple, but it is difficult to avoid impacting the normal operation of the system and introducing errors into the measurement data, which affect the fault diagnosis results. As a result, the development of a non-invasive measurement method based on image signals to diagnose and solve faults quickly and accurately would be a breakthrough for FDD in the HVAC field.

1.2. Article’s Contributions

The research background presented above shows that, in recent years, AI fields such as CV, machine learning, IoT, and robotics have developed rapidly and their applications have expanded. Non-contact measurements using CV technology and AI algorithms have seen significant advances in three areas: indoor environmental monitoring, outdoor environmental monitoring, and HVAC equipment monitoring. This review comprehensively summarizes relevant work in these three fields over the past few years, presents the main applications of non-contact measurement based on CV technology, and elaborates on the challenges faced in its development as well as its future prospects, in order to suggest valuable future research directions.
To achieve these goals, this paper presents a new organizational framework as follows. First, the research background is outlined. Second, the methodology of the review and pertinent materials are summarized. The third section discusses the field of indoor environmental monitoring, the pertinent techniques for non-contact measurement, and their application in two scenarios: sleep state and on-demand ventilation. The following section describes the research conducted on the application of CV techniques and machine learning algorithms to outdoor environmental monitoring. In addition, CV techniques are combined with robotic automation technologies, with an emphasis on their application to building construction safety. Section 5 describes the condition monitoring of HVAC equipment using CV techniques. To illustrate the precision of non-contact visual intelligent monitoring, the research content of CV technology based on visible images applied to two phenomena, heat pump frost and heat exchanger condensation, is described. The conclusions of this paper address the use of new technologies in indoor environmental monitoring, outdoor environmental monitoring, and HVAC equipment monitoring, as well as their possible combinations and research opportunities. The logical framework for the research presented in this paper is shown in Figure 1.

2. Review Methodology

In this study, a content-analysis-based literature review methodology was used. The specific process of the literature search and selection was as follows.

2.1. Literature Search

The literature search was conducted using Google Scholar, Web of Science, and ScienceDirect with the keywords listed in Table 1. To ensure the relevance and high quality of the literature, the keywords were combined with the Boolean operators “AND” and “OR” to comprehensively search for publications related to indoor environmental monitoring, outdoor environmental monitoring, and HVAC equipment monitoring based on computer vision technology. In addition, the references in the search results were scrutinized. Although their proportion was relatively small, strongly relevant milestone works found among these references were included in the review even if they fell outside the search time frame or English language limits; see Table 1.

2.2. Selection Criteria

Figure 2 depicts the screening and adoption of the literature. The procedure was as follows. The literature was screened using the following criteria after excluding non-English, incomplete, or unpublished material:
  • Considering the influence of individual factors such as the subjects’ physical health status, gender, age differences, and so on;
  • Ensuring the stability of the thermal environment in which the subjects are located during the experiment;
  • Combining sensing equipment and CV technology to adequately capture and compare climate parameters, subjective evaluations of subjects’ comfort, and objective physiological parameters;
  • Simultaneous acquisition of frost dew visual characteristics and equipment operating parameters using a visualization lab bench based on cameras and sensing equipment to compare experimental results;
  • During the experiment, the effect of environmental factors such as light and angle on the frost dew image of the equipment can be weakened by the use of fill lights;
  • The level of condensation on the equipment can be lowered by using fill lights.
Figure 2. Flow chart of the literature search and selection process.
The literature was screened, and its quality was assessed, using Physiotherapy Evidence Database (PEDro)-style scales: each selection criterion met by an article is worth one point. As shown in Figure 2, each of the three fields was scored separately, with maximum scores of 7, 7, and 6, respectively. Accordingly, scores of 6, 6, and 5 were chosen as cut-off points by three reviewers, and literature with scores exceeding these thresholds was included in the review. Controversial literature was flagged and resolved through further discussion.

3. Indoor Environment Monitoring

To monitor a person’s thermal state, a non-contact measurement method based on CV technology can effectively compensate for the shortcomings of traditional measurement methods and achieve personalized, high-precision, real-time non-contact measurements. The majority of current non-contact measurements are based on skin temperature, human posture, and personnel occupancy.

3.1. Non-Contact Measurement of the Thermal State of Indoor Personnel

3.1.1. Based on Skin Temperature

Skin temperature is closely related to a person’s thermal state and is an effective indicator for the objective evaluation of a person’s thermal sensations [17]. Non-contact skin temperature measurement is currently possible using thermal images acquired by thermal infrared cameras and visible images acquired by optical cameras, analyzed with infrared imaging and optical techniques, respectively.
  • Infrared thermal-image-based skin temperature measurement.
Infrared imaging technology is a non-contact temperature measurement technology that uses thermal images. Thermal infrared (TIR) cameras can directly measure the skin temperature of exposed areas such as the face and hands to monitor personnel’s thermal status and provide a foundation for HVAC system regulation [18]. Infrared imaging can also be used to assess physiological parameters such as the heart rate, blood perfusion, and respiratory rate [17].
Along with advancements in infrared imaging technology, low-cost TIR cameras that are compact, easy to install, and privacy preserving have emerged [19]. TIR cameras can be used to measure the temperatures of different regions of the face, such as the forehead, nose, ears, and cheeks, and to predict thermal sensations [20,21,22]. Early IR imaging techniques were immature and required the manual delineation of regions of interest (ROI) in thermal images before temperature extraction, which limited their tractability [18,23]. Researchers have since developed automated methods [19,24] that locate faces as ROI regions in thermal images, but these methods are heavily influenced by the person’s posture, motion, and facial angle. When a person moves, the face assumes a non-orthogonal angle, resulting in inaccurate ROI detection.
To improve the accuracy in locating facial ROI regions, researchers have proposed combining infrared imaging techniques, computer vision, and machine learning to predict a person’s thermal status. Aryal et al. [25] used a combination of thermal images captured by a TIR camera and visible light images captured by a regular camera. The detected facial region in the visible image was used to locate the facial skin region in the thermal image and extract the facial skin temperature. Kopaczka et al. [26] combined algorithms for face feature detection, emotion recognition, face frontalization, and analysis to further process the infrared face image. Ghahramani et al. [27] used the same method as Aryal et al. to obtain the entire facial skin temperature. However, because only the absolute temperature is measured, calibration drift is unavoidable when aligning visible and thermal images. The accuracy of the developed thermal comfort prediction model was 65.2%. To avoid the effects of these errors, He et al. [28] measured both facial and hand temperatures and predicted the person’s thermal state using a random forest model. It was discovered that the temperature variables were, in order of importance, the cheek temperature, hand temperature, and nose temperature. The absolute temperature or the temperature difference between the different parts of the body can be used to predict the thermal states of personnel more accurately by combining cheek and hand temperature statistics with the nose temperature. After two independent field studies and laboratory research data validation, the model prediction accuracy was found to be approximately 70%, and the model had some adaptability and validity.
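As an illustration of how such regional temperature features might feed a classifier, the following sketch trains a random forest on synthetic cheek, hand, and nose temperatures. The data, the 32 °C labeling threshold, and the class names are invented for the example; this is not the model of He et al. [28].

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic features: cheek, hand, and nose skin temperatures (deg C).
# The labeling rule (mean above 32 C -> "warm") is an assumption.
n = 300
temps = rng.uniform(28.0, 36.0, size=(n, 3))  # columns: cheek, hand, nose
labels = np.where(temps.mean(axis=1) > 32.0, "warm", "cool")

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(temps, labels)

# Feature importances indicate which body region drives the prediction,
# mirroring the variable-importance ranking discussed above.
print(dict(zip(["cheek", "hand", "nose"], clf.feature_importances_)))
print(clf.predict([[35.0, 34.5, 33.0]]))  # a clearly warm sample
```

In a real study the feature vectors would come from ROI temperatures extracted from aligned thermal/visible images, and the labels from subjects' reported thermal sensation votes.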
To overcome the effects of a person’s posture, movement, and angle, some researchers have considered combining thermal images captured by a TIR camera and red–green–blue–depth (RGB-D) images captured by a Kinect depth camera [29,30]. Cosma et al. [31] used RGB-D data to identify different body parts (head, torso, hands, shoulders, etc.) and combined thermal images to extract skin and clothing temperatures; they then analyzed the data using four machine learning algorithms: support vector machine (SVM), a Gaussian process classifier (GPC), a K-nearest-neighbor classifier (KNN), and a random forest classifier (RFC) [32]. According to the research, the difference in skin and clothing temperatures can be used to predict thermal sensations. However, the high-cost TIR cameras and Kinect cameras used in some studies raise the cost of the equipment and reduce the method’s scalability. In fact, it would be more convenient, efficient, and cost-effective to acquire human thermal physiological signals directly from visible light images captured by optical cameras.
  • Skin temperature measurement based on visible light images.
Optical techniques based on visible images have gradually been applied to non-contact human skin temperature measurement in recent years. Methods such as photoplethysmography (PPG) and Eulerian video magnification (EVM) are common.
PPG is a low-cost non-contact optical technique for the measurement of subtle variations in blood flow [33]. Jung et al. [34] proposed a method to infer a person’s thermal status based on subtle changes in the skin PPG signal amplitude extracted from facial images. To reduce interference with the PPG signal, the method combines independent component analysis and adaptive filtering into a single framework. A positive correlation between the skin temperature and thermal sensation was obtained after conducting experiments on 15 subjects, and the validity of thermal sensation analysis based on visible images was confirmed.
EVM is a technique for visual micro-variation magnification [35]. It has been widely used in both structural inspection and medical fields as a CV technique to observe subtle changes in ROI regions in visible images [36]. The EVM technique was first applied in the field of thermal comfort measurement by Jazizadeh et al. [37]. Jazizadeh designed a framework for the identification of a person’s thermal state using the human thermoregulatory mechanism and the EVM algorithm [38]. Subjects working in front of a computer in the experiment were subjected to thermal stimuli at 20 °C and 30 °C, as shown in Figure 3. The camera captured facial images, which were then processed by the EVM algorithm to detect subtle changes in blood flow and identify the state of regulation of the body temperature and the thermal comfort of the human body, which was then fed back to the HVAC system for automated regulation.
Changes in skin temperature, according to the body’s thermoregulation mechanism, cause blood vessels to dilate/contract, and the skin color then undergoes subtle changes that are imperceptible to the naked eye. Such subtle changes become visible after image magnification by the EVM algorithm. As a result, Cheng et al. [39] proposed a CV-based non-contact human skin temperature measurement technique (Figure 4). They chose young East Asian women as subjects for hand thermal stimulation experiments, and hand images were collected after the hands were stimulated with warm water at 45 °C for 10 min. The EVM algorithm was used to analyze skin color saturation (Figure 5), and a linear relationship between skin color saturation and skin temperature was established using a deep convolutional neural network (DCNN), which predicted thermal comfort. The experimental results demonstrated the validity of the developed individual saturation–temperature (ST) model, with median absolute errors ranging from −0.10 °C to 0.06 °C. Cheng et al. [40] combined EVM and deep learning to create a subtleness magnification and deep learning (NIDL) model and a partly personalized ST model (NIPST). Using 1.44 million sets of hand skin feature data as the dataset, the accuracy of the NIDL model applied to non-contact measurements of Asian females was validated with a mean error of 0.476 °C and a median error of 0.343 °C. However, the preceding study did not take into account individual differences, and, to address this, Cheng et al. [41] proposed a non-contact skin temperature measurement method based on the skin sensitivity index (SSI) and deep learning. The significance of SSI in this deep learning framework was validated using the above hand image dataset, demonstrating that SSI is an excellent high-weight parameter.
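The core of the EVM step described above can be illustrated on a single pixel's temporal signal: a subtle periodic component is isolated with a temporal band-pass filter, amplified, and added back. The signal, frequency band, and magnification factor below are invented for the demonstration; full EVM additionally operates on a spatial pyramid of the video.

```python
import numpy as np

# 10 s of "video" at 30 fps: a steady skin-colour saturation with a
# subtle ~72 bpm (1.2 Hz) variation that is invisible to the naked eye.
fs = 30.0
t = np.arange(0, 10, 1 / fs)
base = 0.5
pulse = 0.002 * np.sin(2 * np.pi * 1.2 * t)
signal = base + pulse

# Temporal band-pass via FFT: keep 0.8-2.0 Hz (typical heart-rate band).
spec = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
band = (freqs >= 0.8) & (freqs <= 2.0)
filtered = np.fft.irfft(np.where(band, spec, 0), n=len(signal))

alpha = 50.0                     # magnification factor
magnified = signal + alpha * filtered

# The peak-to-peak swing of the subtle component is now ~50x larger.
print(round(np.ptp(pulse), 4), round(np.ptp(alpha * filtered), 3))
```

In the saturation–temperature (ST) approach, a per-frame saturation statistic of such a magnified hand-image sequence would then be regressed against measured skin temperature.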

3.1.2. Based on Human Position Posture

Research shows that people often adopt characteristic postures or movements when thermally uncomfortable, so human position and posture information can be used to predict thermal comfort. Human posture recognition technology, whether for single-person [42,43,44] or multi-person [45,46] recognition, has advanced significantly in recent years, with applications in somatic games, healthcare, and other fields [47,48]. The human skeleton keypoints model [49] is a deep neural network (DNN)-based algorithm that can recognize a moving human body and capture person localization information from a distance. Cao et al. [50] developed the OpenPose algorithm, advancing the field of multi-person pose recognition [51]. To improve recognition accuracy, OpenPose learns image features together with image-related spatial models. DeeperCut [52,53] and other pose estimation methods strongly support the non-contact measurement of a person’s thermal state from their position and posture. Microsoft’s Kinect, a 3D body-sensing camera, incorporates functions such as motion capture and image recognition. Meier [54] used Kinect to capture and define four thermally relevant postures and to calculate the corresponding thermal comfort index (TCI), strongly validating the relationship between human posture and thermal comfort (Figure 6). Kinect can also predict human thermal sensations and metabolic rates. Because Kinect is not open-source and is protected by copyright, the OpenPose algorithm combined with human skeleton keypoints technology is frequently used in research.
Xu et al. [55] extracted thermal adaptation behavioral actions to create a personal thermal demand (PTD) prediction model based on a general camera. Experiments demonstrated the accuracy of the proposed model framework for action classification and thermal demand prediction, achieving 91% accuracy in a multi-person office setting. Liu et al. [56] proposed a method to obtain 3D body landmark locations by combining 2D keypoint data from OpenPose with an RGB-D camera. Wang et al. [57] designed and validated an indoor positioning system (CIOPS-RGBD) based on an RGB-D camera. The system employs OpenPose to acquire keypoints of the human body from multiple perspectives, fuses depth data for 3D reconstruction, and predicts the person’s position and posture in real time. Experiments show that CIOPS-RGBD adapts well to densely populated, complex indoor scenes and improves the indoor environment creation system. Yang et al. [58] proposed a non-contact measurement method based on an RGB camera and a human skeleton keypoints model. As shown in Figure 7, 12 thermally uncomfortable postures were defined, and the method’s accuracy in predicting thermal comfort was cross-validated against a questionnaire. Although the human skeleton keypoints technique facilitates the development of individual thermal comfort models, it has significant technical limitations. In practice, thermally uncomfortable postures can be caused by factors unrelated to cold/heat sensation, and current posture definitions do not yet cover all hot and cold postures, leaving room for misclassification.
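A toy sketch of posture inference from 2-D skeleton keypoints of the kind OpenPose outputs: the keypoint names, coordinates, threshold, and the "wrists close together implies folded arms" rule are all illustrative assumptions, not the classifier of any study cited above.

```python
import math

def wrist_distance_ratio(kp):
    """Distance between wrists normalised by shoulder width, making the
    cue independent of how far the person stands from the camera."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    shoulder_w = dist(kp["l_shoulder"], kp["r_shoulder"])
    return dist(kp["l_wrist"], kp["r_wrist"]) / shoulder_w

def posture_label(kp, thresh=0.5):
    # Wrists much closer together than the shoulders suggests folded
    # arms, a posture often associated with feeling cold.
    return "possibly cold" if wrist_distance_ratio(kp) < thresh else "neutral"

# Hypothetical keypoints in image pixel coordinates (x, y).
folded = {"l_shoulder": (100, 200), "r_shoulder": (180, 200),
          "l_wrist": (135, 260), "r_wrist": (145, 260)}
open_arms = {"l_shoulder": (100, 200), "r_shoulder": (180, 200),
             "l_wrist": (60, 300), "r_wrist": (220, 300)}
print(posture_label(folded), posture_label(open_arms))
```

Real systems replace such a hand-written rule with a learned classifier over many keypoints and frames, which is precisely where misclassification from non-thermal causes arises.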
A passive infrared (PIR) detector is a sensor that detects the infrared radiation that an object emits or reflects [59]. It is well suited for indoor personnel location because it can absorb human infrared radiation that is invisible to the naked eye (human radiation is primarily concentrated in the wavelength range of 9–10 µm) [60,61]. It is now widely used in the surveillance field due to its small size, low power consumption, high sensitivity, low price, and wide detection range.
The PIR sensor can send the personnel location information as a feedback signal to the HVAC system’s control unit, which can then control the background air conditioning system. The air conditioning system adjusts the operation mode based on the person’s occupancy, saving a lot of energy. PIR sensors are frequently used in conjunction with other technologies because they cannot recognize stationary human bodies. Other types of sensors and PIR sensors are usually responsible for separate operational tasks and must be triggered at the same time in order to activate the control signal [62]. According to studies, the use of this technology can increase energy savings by up to 30% [63].
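The occupancy logic described above can be sketched as a simple hold-off rule: because a PIR sensor cannot see a stationary occupant, the room is kept "occupied" for a timeout after the last motion event. The timeout value and reading format are assumptions for the illustration.

```python
def occupancy(events, hold_off=3):
    """events: per-timestep PIR readings (1 = motion detected).
    Returns per-step occupied flags: the room stays 'occupied' for
    hold_off steps after the last motion, bridging moments when an
    occupant sits still."""
    occupied, since_motion = [], hold_off
    for e in events:
        since_motion = 0 if e else since_motion + 1
        occupied.append(since_motion < hold_off)
    return occupied

# Motion, then the occupant sits still for two steps, then leaves.
print(occupancy([1, 0, 0, 1, 0, 0, 0, 0]))
```

In practice the hold-off would be replaced or supplemented by a second sensor (e.g. CO2 or a camera) whose reading must agree before the HVAC control signal changes.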

3.2. Application

3.2.1. Initial Exploration of Sleep State Monitoring

Sleep quality is important for human health and can be influenced by a number of factors such as health, mood, and sleep environment. Numerous studies have demonstrated that the indoor thermal environment has a significant influence on sleep quality.
There are two types of sleep state monitoring methods available today: contact and non-contact. Traditional contact measurement techniques have numerous flaws. The questionnaire method, which relies on the subject’s memory to assess the previous day’s sleep, is unreliable and inaccurate; wristband sleep monitors cannot obtain information about sleepers’ sleep cycle distribution; and polysomnographs require instrumentation that interferes with normal sleep, causing the “first-night effect” and reducing measurement accuracy. Non-contact methods are further classified as auditory-based and vision-based. The auditory-based method is impractical because it imposes strict requirements on the quietness of the sleep environment. As a result, the vision-based method for detecting the thermal comfort of human sleep has received a lot of attention.
The vision-based method for sleep thermal comfort detection is advantageous for the collection of more sleep-related data. Many movements occur during sleep, such as eye movements, rolling over, jaw movements, leg tremors, and so on. These movements become sleep information and are used in the medical community to assess sleep quality. Researchers collect sleep image/video data and extract sleep-related information using the Eulerian video magnification technique, the human skeleton keypoints model, and machine learning algorithms to determine sleep quality. Peng et al. [64] proposed a multimodal sensing system that integrates visible images, thermal images, and heart rate signals. The system classifies the multimodal signals using a support vector machine (SVM) and fuses the multimodal outputs together to infer the sleep state. The results show that the proposed system successfully distinguishes between sleep and waking states. Choe et al. [65] created an automatic videosomnography (VSG) method that models the relationship between human head movements and sleep states using machine learning. A number of non-contact posture measurement methods that are highly instructive for sleep state monitoring have emerged in recent years. Mohammadi et al. [66] used a TIR camera in conjunction with a deep learning algorithm to measure sleep posture automatically. The TIR camera was used to capture realistic sleep thermal images of 12 subjects under a thin blanket, which were then fed into a deep learning network to classify the four delineated sleep postures. According to the experimental results, ResNet 152 had the highest classification accuracy of more than 95% among the seven deep networks tested. Piriyajitakonkij et al. [67] developed an ultra-wideband (UWB) radar-based method for the detection of sleep states.
To improve the detection accuracy, the research process employs deep learning algorithms to classify the sleep pose and fuses the time-domain–frequency-domain signals via the multi-view learning (MLV) method. Despite their increasing computational complexity, these methods do not significantly improve the accuracy of sleep pose detection.
Cheng et al. [68] proposed a novel vision-based non-contact method for the detection of human sleep thermal comfort. Based on 438 valid questionnaires, the method defined 10 thermal comfort sleep postures, as shown in Figure 8. The thermal comfort sleep posture dataset was created by collecting 2.65 million frames of sleep postures in the subjects’ natural sleep state. The basic framework and model of the human sleep posture detection algorithm were constructed using the residual idea and the long short-term memory (LSTM) algorithm, based on the large amount of data collected. The human skeleton keypoints technique was used to determine the person’s sleep posture, and video image processing was used to obtain the quilt coverage. The results showed that the proposed sleep thermal comfort detection method had an average accuracy of 91.15%, as well as obvious robustness and effectiveness.
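The two visual cues above, a detected sleep posture class and the quilt coverage of the body, can be sketched with synthetic binary masks. The coverage computation is standard mask arithmetic; the fusion rule and thresholds are assumptions for illustration, not the model of Cheng et al. [68].

```python
import numpy as np

def quilt_coverage(body_mask, quilt_mask):
    """Fraction of body pixels also covered by the quilt mask."""
    body = body_mask.astype(bool)
    return (body & quilt_mask.astype(bool)).sum() / body.sum()

def sleep_comfort(posture, coverage):
    # Hypothetical rule: a curled posture under a fully drawn-up quilt
    # suggests cold; a sprawled posture with the quilt kicked off
    # suggests warm.
    if posture == "curled" and coverage > 0.9:
        return "possibly cold"
    if posture == "sprawled" and coverage < 0.3:
        return "possibly warm"
    return "comfortable"

body = np.zeros((8, 8), dtype=int)
body[2:6, 2:6] = 1              # 16 body pixels
quilt = np.zeros((8, 8), dtype=int)
quilt[2:6, 2:4] = 1             # covers the left half of the body
cov = quilt_coverage(body, quilt)
print(cov, sleep_comfort("sprawled", cov))
```

In the actual pipeline the posture label comes from an LSTM over skeleton keypoints and the masks from segmentation of the video frames.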
In order to monitor more types of sleep action postures in the future, the algorithm and the detection performance must be improved. In the future, the intelligent regulation of the sleep thermal environment will be studied further, and the HVAC system parameters will be automatically adjusted in real time to avoid overcooling/overheating supply and meet the demand for human sleep thermal comfort. The non-contact sleep thermal comfort monitoring method will be valued and applied in the field of elderly care as China’s population ages.

3.2.2. Ventilation on Demand

On-demand ventilation can be achieved in an energy-saving ventilation strategy via the non-contact measurement of personnel position and posture to ensure energy savings and comfort. Wang et al. [69] proposed a personnel positioning system based on a human skeleton keypoints model, as shown in Figure 9. This system detects room operation patterns in a multi-purpose lecture hall (classroom/meeting room) by identifying and estimating personnel occupancy and regulates the HVAC system accordingly. Experimental results show that in small and medium-sized indoor spaces, the system can complete image acquisition, extraction, 3D reconstruction, and data fusion in 1.5 s, performing real-time human positioning and pose recognition. The environment of large open indoor spaces, on the other hand, is more complex, and the tracking and positioning of indoor occupants, as well as the real-time regulation of air conditioning systems, face new challenges. Building on this, Cui et al. [70] proposed an intelligent zonal ventilation control strategy based on occupancy. The strategy uses AS-DA-Net to identify the number of heads in video images, predict the occupancy density of each partition, detect occupancy dynamically, and automatically regulate the air supply volume of each partition. The experimental results show that the proposed scheme is effective for large open indoor spaces: compared with existing CO2-concentration-based ventilation control, it better balances personnel thermal comfort, indoor air quality, and energy consumption.
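The zonal control idea above reduces to mapping per-zone head counts (as a detector such as AS-DA-Net might report) to air supply volumes, with a minimum flow for unoccupied zones. The per-person rate and minimum flow below are illustrative values, not figures from Cui et al. [70].

```python
def zone_airflow(head_counts, per_person=30.0, minimum=50.0):
    """Return supply airflow (m3/h) for each zone: proportional to the
    detected occupant count, never below a ventilation minimum."""
    return [max(minimum, n * per_person) for n in head_counts]

# Three zones: empty, lightly occupied, densely occupied.
print(zone_airflow([0, 2, 5]))   # -> [50.0, 60.0, 150.0]
```

A deployed controller would additionally rate-limit changes and cross-check against CO2 readings before actuating the dampers.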
It is also possible to adjust the air direction, air speed, and air volume of the air conditioning system in real time to meet the cooling/heating needs of personnel by monitoring their thermal sensations in a human-centered ventilation strategy. Figure 10 depicts a micro-environment air supply device with non-contact automatic control. During the experiment, the subject working in front of the computer receives three types of temperature measurement simultaneously: the RGB camera detects the subject's hot or cold discomfort postures while also capturing the subject's facial image, to which the EVM algorithm is applied to calculate the facial temperature; the TIR camera measures the facial temperature directly; and a semi-contact device, such as a thermometric bracelet, measures the skin temperature. On the integrated development board, the three temperature results are compared and used to predict the subject's real-time thermal sensation.
However, each channel has limitations: the small exposed area of human skin causes temperature measurement errors in the EVM algorithm; infrared temperature measurement is affected by stray light sources or the movement of the subject's body; and the long-term wearing of the bracelet causes discomfort, so intermittent wear leads to missing data. To mitigate these errors, a voice feedback device is programmed to ask personnel about their thermal sensations, compare the answers with the predicted thermal sensations, and control the end device based on the result of this check.
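The three-channel comparison and the voice-feedback consistency check can be sketched as follows. The agreement tolerance and the median-based outlier rejection are assumptions for the sketch, not the authors' implementation:

```python
# Illustrative fusion of the three temperature channels described above
# (EVM-from-RGB, TIR camera, wrist bracelet). The tolerance value and the
# outlier-rejection rule are assumptions, not from the cited system.

def fuse_skin_temperature(t_evm, t_tir, t_bracelet, tol=0.8):
    """Return a consensus skin temperature, ignoring channels that are
    missing (None) or that disagree with the median by more than `tol` °C."""
    readings = [t for t in (t_evm, t_tir, t_bracelet) if t is not None]
    if not readings:
        return None
    readings.sort()
    median = readings[len(readings) // 2]
    kept = [t for t in readings if abs(t - median) <= tol]
    return sum(kept) / len(kept)

def needs_voice_check(predicted_sensation, reported_sensation):
    """Trigger the voice-feedback device when prediction and report differ."""
    return predicted_sensation != reported_sensation

# A slipped bracelet produces an outlier that the fusion discards.
t = fuse_skin_temperature(33.9, 34.1, 31.0)
```

Rejecting the channel that disagrees with the other two is one simple way to tolerate the error sources listed above (small skin area, stray light, intermittent bracelet wear) without discarding the whole measurement.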
The speed of data extraction, analysis, and signal transmission in non-contact measurement based on CV technology is faster than the speed of mechanical equipment (valves, fans, etc.). The mismatch in operational processing speed impedes the practical application of on-demand ventilation technology and non-contact measurement technology. As a result, Zhai et al. [71] proposed combining an energy-efficient fan with a higher air conditioning background temperature without changing the room setpoint. The energy-saving fan’s adjustment speed corresponds to the processing speed of the non-contact measurement technique, avoiding the limitation of the air conditioning system’s slow adjustment speed. However, the room’s size, its irregular shape, and the mutual occlusion of people can all lead to CV technique misjudgments.
The rational use of natural ventilation in an on-demand ventilation strategy can balance energy efficiency and thermal comfort needs. The degree of window opening affects the efficiency of natural ventilation. Methods for monitoring window status with common cameras fall into two types: traditional and novel. Traditional window condition monitoring relies on image processing techniques that extract the intensity distribution of image pixels. Bourikas et al. [72] proposed and validated a camera-based method for evaluating the window opening type and degree from window pixel intensity distribution images; the measurement accuracy was greater than 90%. In practice, however, the fixed location of the camera limits the number of façade monitoring stations and thus the sample data. Zheng et al. [73] calculated the percentage of window opening based on the intensity distribution of window pixels. To obtain a larger sample, the experiment was carried out with large-scale data sampling: the sliding windows of a hospital in Nanjing were studied, with a recognition error of approximately 8%. Based on CV techniques, Luong et al. [74] developed a method for monitoring the condition of building façade windows. The method uses image segmentation to automatically segment the individual image of each window, with an accuracy of 89%. Its limitations are that it is only applicable to shaded windows and that the manually adjusted window status thresholds are not scalable. In fact, window monitoring methods based on the pixel intensity distribution are easily influenced by light levels and weather conditions. As a result, deep learning algorithms have been introduced into the field as novel window state monitoring methods. Tien et al. [75] developed and validated a deep learning method capable of automatically identifying individual window states in real time. The method's accuracy was 97.29%, demonstrating the advantages of deep learning for automatic window state recognition. Sun et al. [76] proposed a method for the automated real-time monitoring of window states in severe cold regions, combining CNNs and image processing techniques. Experiments showed that in the transition season, occupants in severe cold regions prefer large window opening angles, and the window opening probability in the southeast direction is greater than in other directions. The method is highly scalable and can be combined with building energy consumption and other factors to analyze multiple application scenarios.
Window monitoring using TIR cameras can overcome the limitations of optical cameras and is especially useful for observing window states at night. Chen et al. [77] created a TIR-camera-based remote sensing method to identify indoor temperatures. The results demonstrated that IR images of fully opened windows could clearly quantify the indoor temperature in the heating, transition, and cooling states. The absolute deviation between the measured infrared temperature and the true temperature at different heights was 0.5 °C for the heating and transition states, and the deviation was greatest for the cooling state. Future research could investigate the differences between daytime and nighttime window opening patterns during the transition season.
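The pixel-intensity idea behind the traditional methods of refs. [72,73] can be sketched in a few lines: within a window's bounding region, open (sky/outdoor) pixels are assumed brighter than the frame and glass, so the opening fraction is estimated from a brightness threshold. The threshold value and the synthetic patch are assumptions for illustration:

```python
# Minimal sketch of window-opening estimation from the pixel intensity
# distribution. The threshold (assumed here) would in practice be tuned to
# lighting conditions, which is exactly the fragility noted in the text.
import numpy as np

def window_open_fraction(gray_patch: np.ndarray, threshold: float = 180.0) -> float:
    """Estimate the opening percentage of a window from a grayscale patch
    (values 0-255) as the share of pixels brighter than `threshold`."""
    bright = gray_patch > threshold
    return float(bright.mean())

# Synthetic patch: top 3 of 10 rows are bright "open" sky, rest is dark frame.
patch = np.full((10, 10), 60.0)
patch[:3, :] = 230.0
frac = window_open_fraction(patch)   # -> 0.3
```

The dependence on a hand-set threshold makes clear why such methods are easily disturbed by light levels and weather, motivating the deep learning alternatives discussed above.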

4. Outdoor Environment Monitoring

4.1. Urban Environmental Monitoring

Urban environmental monitoring requires the collection of both subjective human perception and objective environmental data. There are two categories based on the data monitoring methods: field measurement methods and image measurement methods (Figure 11).

4.1.1. Field Measurements

Traditional outdoor thermal comfort measurement is a field survey method based on questionnaires and outdoor parameter measurements. The subjective perceptions of subjects are obtained through questionnaires, and environmental and physiological parameters measured by sensing devices are used to obtain thermal comfort evaluation indices such as PET, PMV, UTCI, SET*, and so on [78], which are mapped to human thermal sensations for outdoor thermal comfort modeling. The questionnaire method is simple and straightforward, but it can interfere with pedestrian behavior patterns. The microclimatic environment during the survey can change, which can affect the accuracy of the results [79]. The outdoor parameter measurement method is primarily based on a network of weather stations and various sensing devices to obtain microclimatic parameters such as the temperature, relative humidity, barometric pressure, wind speed and direction, solar irradiance, and rainfall [80,81,82]. The microclimatic conditions, particularly temperature, have a large impact on outdoor thermal comfort [83,84]. There are two types of weather stations: stationary and mobile [85]. Although fixed weather stations allow long-term observations of meteorological data [86,87], the amount of information can be limited by the location and number of weather stations, making it difficult to adequately display spatial variations in heat, and the equipment maintenance costs are high [88]. Mobile weather stations compensate for the fixed type by installing sensing equipment in a vehicle at a height of 1.5 m from the ground (the average height of the human heart) and collecting data in a mobile manner based on a dedicated vehicle [89,90]. However, the method is still constrained by the application scenario.
To overcome the limitations of mobile measurements in vehicles, portable/wearable sensing devices with strong communication capabilities and low costs are gaining popularity [91]. Wearable devices that integrate miniature weather stations and embedded sensors with helmets and backpacks can continuously monitor spatial and temporal changes in information such as microclimatic parameters [92,93], physiological parameters [94], geographic locations [95], concentrations of various pollutants [96,97], and noise [98] in real time, and they transmit and store the data via wireless networks. On this basis, Kulkarni et al. [99] integrated vision systems and machine learning algorithms into an Internet of Things (IoT) weather sensing system and proposed MaRTiny, a new low-cost computer vision biometeorological sensing device. As shown in Figure 12, the meteorological system of this device can passively collect microclimatic data and estimate the mean radiation temperature (MRT) using the SVM algorithm. The vision system employs pedestrian detection (YOLOv3) and shadow detection algorithm (BDRAR network) models based on the NVIDIA Jetson Nano development board, counts the number of people in the shade and sunlight using the camera, and uploads the data to Amazon Web Services (AWS) servers. The study’s findings show that the root mean square error (RMSE) of the MRT estimated based on machine learning is reduced from 10 °C to 4 °C in the meteorological system, and the accuracy of pedestrian detection is 95% and that of shade detection is 80% in the vision system. The observations of the meteorological system and the vision system are consistent. The data collected by the MaRTiny device can effectively analyze the impact of the urban microclimatic conditions on people’s behavioral patterns in public spaces (e.g., the number of people holding umbrellas and taking transportation) to guide the management and design of urban greenery and improve urban thermal comfort.
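The fusion of a pedestrian detector and a shadow mask in the MaRTiny vision system can be sketched as follows: each detected person is classified as shaded if the bottom-center of their bounding box (an approximate foot point) falls on a shadow pixel. The box format, mask layout, and synthetic scene are assumptions, not the authors' exact pipeline:

```python
# Sketch of counting pedestrians in shade vs. sunlight by combining
# detector boxes (e.g. from a YOLO-style model) with a binary shadow mask
# (e.g. from a BDRAR-style model). All names and data here are illustrative.
import numpy as np

def count_shade_sun(boxes, shadow_mask):
    """boxes: list of (x1, y1, x2, y2) pixel boxes; shadow_mask: bool array
    indexed as [row, col], True where the shadow model marks shadow."""
    shaded = 0
    for x1, y1, x2, y2 in boxes:
        foot_col = int((x1 + x2) / 2)   # horizontal centre of the box
        foot_row = int(y2)              # bottom edge approximates the feet
        if shadow_mask[foot_row, foot_col]:
            shaded += 1
    return shaded, len(boxes) - shaded

mask = np.zeros((100, 100), dtype=bool)
mask[:, :50] = True                      # left half of the scene is in shadow
boxes = [(10, 20, 30, 80), (60, 20, 80, 90)]  # one person on each side
in_shade, in_sun = count_shade_sun(boxes, mask)
```

Using the foot point rather than the box center avoids misclassifying a person whose upper body overlaps a building shadow while they stand in sunlight.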
Field measurement methods have long data collection update cycles, are limited in the urban areas that they can cover, are time-consuming and labor-intensive, and are not appropriate for large-scale applications. As a result, building IoT systems in the field of urban environmental monitoring with low-cost and integrated sensor technologies and artificial intelligence has become an important means of shifting from traditional to new types of observation.

4.1.2. Remote Sensing Image Measurement

Remote sensing images are obtained by photographing or scanning the Earth's surface with remote sensors installed on remote sensing platforms. The raw data are processed or recoded to produce images that serve as the foundation for outdoor thermal environment and thermal comfort studies. Based on the electromagnetic spectral band used, remote sensing technology is classified into infrared thermal remote sensing, visible light remote sensing, LiDAR, multispectral remote sensing, and so on.
Satellites are primarily used in infrared thermal remote sensing to acquire thermal images and calculate the land surface temperature (LST) (Figure 13) [100,101]. Satellites such as Landsat and NOAA were successfully launched in the 1970s and began to provide data support for urban observation, climate and environment research, and other scientific fields. Researchers used thermal images to measure the LST and initially explore the urban thermal environment [102]. The combination of thermal images and measured meteorological data is commonly used to assess the thermal environment and urban-scale issues such as urban heat islands [103,104], urban heat flux [105,106], and other urban parameters. The Advanced Very High Resolution Radiometer (AVHRR) is installed on NOAA satellites; based on its thermal images, Stathopoulou et al. [107] proposed a method for estimating the discomfort index (DI). When compared with DI values calculated from meteorological data, it was found that the DI could be effectively estimated from thermal images with a resolution of 1.1 km to measure human thermal sensations. Xu et al. [108] improved this estimation method in terms of spatial detail, demonstrating that high-precision DI images with a resolution of 10 m could distinguish three types of thermal discomfort, reflecting spatial differences in the urban environment in terms of buildings, vegetation, and water. Mijani et al. [109] used Tehran, Iran, as their study site to propose and validate a least squares method (LSM) for outdoor thermal comfort modeling based on thermal remote sensing images and climate data. The model takes urban environmental parameters and human physiological parameters as inputs and outputs DI values; the correlation coefficient between the true and predicted DI values exceeds 0.85.
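The discomfort index estimated in these studies is Thom's DI, which combines air temperature and relative humidity; a direct computation is shown below for illustration (the DI-from-imagery methods of refs. [107,108] regress this quantity from thermal data rather than computing it from station measurements):

```python
# Thom's discomfort index (DI), the thermal sensation index estimated from
# thermal imagery in refs. [107,108], computed here from air temperature
# (°C) and relative humidity (%) for illustration.

def discomfort_index(t_air_c: float, rh_percent: float) -> float:
    """DI = T - 0.55 * (1 - 0.01 * RH) * (T - 14.5), with T in °C."""
    return t_air_c - 0.55 * (1.0 - 0.01 * rh_percent) * (t_air_c - 14.5)

# At 100% RH the humidity correction vanishes and DI equals the temperature;
# drier air at the same temperature yields a lower (more comfortable) DI.
di_humid = discomfort_index(30.0, 100.0)
di_dry = discomfort_index(30.0, 40.0)
```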
Although thermal remote sensing images can provide urban-scale LST, the data are constrained by the acquisition time of the satellite. Furthermore, the surface temperature derived from remote sensing data represents the temperature of the top of the tree canopy, building roofs, and the ground, which cannot fully represent the actual thermal stimuli experienced by street pedestrians [108]. As a result, researchers [110,111,112,113] have integrated infrared sensors into devices such as air vehicles, roof viewpoints, ground observation devices, and smartphones to gradually and accurately scale temperature collection from the urban scale to the neighborhood scale, building scale, and microscale [114]. Zhao et al. [115] created a three-dimensional thermal imaging technique. This technique observes and assesses the outdoor spatial thermal environment at the street level by using 2D thermal images acquired by a TIR camera mounted near the ground, in conjunction with 3D models acquired by a UAV to generate 3D thermal images and extract MRT values. Furthermore, a visualization tool integrating thermal images, IoT, and a digital twin platform is being developed to monitor urban environmental data [116].
Visible light remote sensing technology allows satellites, aircraft, and other aerial vehicles to observe the ground from above (Figure 14). It can display 2D urban features such as roads and bridges, buildings, land use, vegetation greenery, and geomorphology [117,118], and its data are more structured and consistent than traditional images [119,120]. Visible light remote sensing images are widely used for the analysis, evaluation, and visualization of urban greening and shading levels at macroscopic scales [101,121]. Spatial remote sensing technology has been improving steadily for decades. Combining CV techniques and deep learning algorithms to process visible images can automate image classification [122,123], semantic segmentation [124], and scene parsing [125], lowering labor costs. The green space percentage, green space/building area ratio, tree density, shade coverage, and other indices are commonly used [126,127,128]. Hong et al. [129] used GIS to extract the green space density and pavement from visible light images using deep learning and image processing techniques. Although visible light remote sensing images are macroscopic and fast to acquire, they capture only one perspective, lack elevation information, and cannot capture urban details at the street level; moreover, high-resolution remote sensing data are costly to acquire. The fusion of multiple data sources has therefore become a new analysis method for extracting urban morphology, building information, land use types, and other features. To analyze urban shading, satellite images can be combined with geographic information system (GIS) data, and point cloud data from synthetic aperture radar (SAR), light detection and ranging (LiDAR), multispectral remote sensing, and visible light remote sensing are frequently combined with geospatial data or street view images [131,132,133]. Hu et al. [130] used LiDAR in conjunction with near-ground photography to extract tree canopy lines as a new index to quantify street tree morphology. The LiDAR technique, however, is expensive and unsuitable for large-scale research applications, and radar data decoding and compilation are time-consuming and tedious.

4.1.3. Street View Image Measurement

The big data era, the maturity and popularity of navigation and positioning devices, mobile devices, and mapping services, and the rapid development of sensing and digitization technologies have given rise to a new type of geographic big dataset (Figure 15): street view imagery (SVI) [134].
Street view images compensate for satellite images’ deficiencies by recording detailed three-dimensional profiles of city streets from microscopic and pedestrian side view perspectives. Image data provide data support for quantitative studies of the urban physical environment due to their wide coverage, high resolution, large data volume, and low cost. SVI is classified into panoramic images and crowdsourced images, according to data sources. Panoramic images are primarily captured by map service providers who use street view vehicles to traverse the city road network, collecting 360-degree panoramic visual information. Google Street View (GSV), Tencent Street View (TSV), and Baidu Street View (BSV) are all well-known service platforms [135,136]. Mapillary, KartaView, and Apple Map are examples of popular crowdsourcing service platforms. Individual web users capture and provide crowdsourced images. Because of the high quality and perspective of the images captured by users, they are frequently used as a supplement to panoramic images.
Technical support for the quantification of urban physical environments is provided by CV techniques and deep learning algorithms [137]. SVI is used for visual object recognition and the classification of scene types and attributes in physical environment quantification. The primary visual tasks of CV techniques are object detection and object segmentation. The former can identify object position and type information in an image, while the latter can classify each pixel point in an image. DCNN is the most commonly used image analysis model in deep learning, and its representative structural frameworks include AlexNet, GoogLeNet, DenseNet, and others. Deep-learning-based CV techniques can extract multi-level information such as the city geometry, building façade color, city greenery level, and sky view factor (SVF) automatically.
Related research has found that urban geometry and street greenery have an impact on the urban thermal environment and that good urban planning and design can improve outdoor pedestrian thermal comfort [138,139,140,141,142]. It has become popular to study the urban street environment using SVI to quantify the urban geometry and greening level.
  • Urban Geometry.
The street canyons formed by clusters of high-density buildings in city centers are critical in influencing the urban microclimate and comfort. The height-to-width ratio, street orientation, and sky view factor (SVF) are the main geometric parameters of urban morphological structures [143]. Oke proposed the concept of the SVF, defined as the ratio of the pedestrian-visible sky area at a given surface point to the full-view sky area in the urban street space [144]. This dimensionless value, between 0 and 1, represents the degree of openness of outdoor public spaces and also serves as an index of the level of shading in various street canyons [145]. As the morphology of the buildings and street trees on both sides of the street changes, so does the level of sky visibility from the pedestrian viewpoint [146]. A value of 0 indicates that the sky is completely shaded, while a value of 1 indicates no shading at all [147]. Geometric methods [148], global positioning system (GPS) methods [149], simulation methods, and image methods [150] are commonly used to estimate the SVF. The image method is more accurate and straightforward than the other methods [151].
The fisheye image method is a traditional approach to estimating the SVF [152]. In a street canyon, a circular fisheye lens is pointed vertically upward at the sky, projecting the hemispheric environment onto a circular plane to capture a 2D circular fisheye photograph. Image processing software is then used to binarize the image and adjust its contrast and brightness in order to segment the sky area from the occluded area, and the SVF is calculated as the percentage of visible sky area [153,154]. The fisheye image method accounts for the occlusion caused by vegetation and other urban infrastructure, yielding more accurate estimates. However, the segmentation process in image processing software such as RayMan and SkyView relies on manual operation and parameter setting, which is time-consuming and labor-intensive. The method also requires field photography, so it is affected by lighting and weather conditions, making it unsuitable for large-scale studies [155].
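Once a fisheye photograph has been binarized into sky and non-sky pixels, the SVF follows from a weighted sky fraction. The sketch below assumes an equiangular lens (zenith angle proportional to image radius) and weights each pixel by cos(θ)·sin(θ), which accounts for both the solid angle and the cosine projection; the synthetic images are purely for the demo:

```python
# Sketch of SVF computation from a binarised fisheye image, assuming an
# equiangular projection. A real pipeline would first segment the photo
# into sky/non-sky; here we use synthetic masks.
import numpy as np

def svf_from_fisheye(sky: np.ndarray) -> float:
    """sky: square boolean array, True = visible sky, with the fisheye disc
    centred in the frame. Returns the sky view factor in [0, 1]."""
    n = sky.shape[0]
    c = (n - 1) / 2.0
    yy, xx = np.mgrid[0:n, 0:n]
    r = np.hypot(yy - c, xx - c) / c        # normalised radius from centre
    inside = r <= 1.0                        # pixels on the fisheye disc
    theta = r * (np.pi / 2.0)                # equiangular: radius -> zenith
    w = np.cos(theta) * np.sin(theta)        # solid-angle * cosine weight
    return float((w * sky)[inside].sum() / w[inside].sum())

# A fully open sky gives SVF = 1; a fully blocked hemisphere gives 0.
n = 201
svf_open = svf_from_fisheye(np.ones((n, n), dtype=bool))
svf_blocked = svf_from_fisheye(np.zeros((n, n), dtype=bool))
```

The weighting matters: obstructions near the horizon (large θ) contribute little to the view factor, so a simple unweighted pixel ratio would overestimate their effect.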
The street-level image method is a low-cost, efficient, open-source SVI-based method for estimating the SVF [156]. Several researchers [157,158] have generated fisheye images from SVI via hemispherical transformation and then estimated the SVF to evaluate solar radiation and the sunshine duration at the street level. These studies demonstrate the feasibility of estimating the SVF from SVI; while they effectively reduce the field photography time, the manual image processing time remains lengthy. Several studies have therefore investigated methods of automatically estimating the SVF on a large scale. Xia et al. [159] proposed an automatic deep-learning-based SVF estimation method using the DeepLabV3+ semantic segmentation model. The method semantically segments SVI and generates fisheye images to compute the SVF automatically, recognizing the sky with an accuracy of 98.62%. To recognize GSV images and estimate the SVF, Liang et al. [160] used an open-source DCNN algorithm called SegNet. Zeng et al. [156] created an SVI-based SVF estimation toolbox using Python and the OpenCV software library; it can batch-process street view images and estimate the SVF quickly. However, its sky area detection is vulnerable to vegetation occlusion, such as massive tree canopies, and seasonal variations in vegetation can cause errors in SVF estimation. Gong et al. [155] chose the Hong Kong urban area as the study object and proposed a method for estimating the sky, tree, and building view factors (SVF, TVF, BVF) of street canyons in a complex urban environment. The method extracts street features from GSV images using the Pyramid Scene Parsing Network (PSPNet) and directly verifies the accuracy of the estimated view factors (VF) against hemispheric photography reference data. Based on this, Gong et al. [161] calculated the solar radiation intensity of street canyons using GSV images and demonstrated the close relationship between the SVF and solar radiation. Following a similar line of research, Du et al. [162] developed a method to obtain the VF from BSV images and automatically estimated the sunshine duration. Nice et al. [163] proposed an automated sky area detection system based on an adaptive algorithm to better handle outdoor images under various weather conditions: a CNN trained on 25,000 images from the Skyfinder dataset adaptively selects the best image processing method (mean-shift segmentation, K-means clustering, or Sobel filters) to improve the detection accuracy. These studies show that the low-cost, automatic, and efficient estimation of the SVF is feasible.
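The hemispherical transformation used in these pipelines maps an equirectangular street-view panorama to an upward-looking fisheye image. A minimal nearest-neighbour version is sketched below; the equiangular target projection and the synthetic sky panorama are assumptions for the demo:

```python
# Minimal hemispherical transformation from an equirectangular panorama
# (upper hemisphere only) to an upward fisheye view, as a precursor to SVF
# computation. Nearest-neighbour sampling keeps the sketch short.
import numpy as np

def panorama_to_fisheye(pano: np.ndarray, size: int = 256) -> np.ndarray:
    """pano: H x W array covering zenith angles 0..90° over rows 0..H-1 and
    azimuth 0..360° over columns. Returns a size x size fisheye image."""
    h, w = pano.shape[:2]
    c = (size - 1) / 2.0
    yy, xx = np.mgrid[0:size, 0:size]
    dx, dy = xx - c, yy - c
    r = np.hypot(dx, dy) / c                       # 0 at zenith, 1 at horizon
    theta = np.clip(r, 0.0, 1.0) * (np.pi / 2.0)   # zenith angle (equiangular)
    phi = np.mod(np.arctan2(dy, dx), 2.0 * np.pi)  # azimuth angle
    rows = np.clip((theta / (np.pi / 2.0)) * (h - 1), 0, h - 1).astype(int)
    cols = np.clip((phi / (2.0 * np.pi)) * (w - 1), 0, w - 1).astype(int)
    out = pano[rows, cols]
    out[r > 1.0] = 0                               # outside the fisheye disc
    return out

# Panorama whose zenith-side half is sky (=1): the fisheye centre is sky,
# and the corners (beyond the hemisphere) are zeroed out.
pano = np.zeros((90, 360))
pano[:45, :] = 1.0
fish = panorama_to_fisheye(pano)
```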
Further quantitative studies of the urban environment using SVI rely primarily on the extraction of data on various urban elements, such as buildings, roads and bridges, and street furniture [164]. Key features extracted by SVI in the building domain, such as the building type and number [165,166], building condition [167,168], year of construction [169,170], and floor height and number of stories [171,172], greatly enrich the building dataset. The color of a building’s façade, for example, has a significant impact on pedestrians’ visual experience and environmental perceptions. Zhong et al. [173] used deep learning algorithms to automatically extract the dominant color of the urban façade (DCUF) from BSV panoramic images. Zhang et al. [174] used a similar approach to add a building function classification module to the scheme while achieving the automatic computational recognition of urban façade colors. These studies serve as a foundation for ideas for urban planning and design, as well as for improving residents’ thermal comfort. CV techniques, deep learning algorithms combined with SVI, POI, satellite remote sensing data, and social media data have also been widely used in recent years for building energy consumption [167], electricity prediction [175], land use [176,177,178], urban functional classification [179,180], road and bridge monitoring [181,182,183], and other areas [184].
  • Urban Greening.
Urban greenery is an important part of the urban environment. Green space plants include street trees, bushes, lawns, urban parks, and other types of vegetation. Proper greenery planning and shading design can effectively regulate the urban microclimate, improve the urban thermal environment and pedestrian thermal comfort level [185,186], and improve the visual experience and psychological well-being of urban residents [187,188]. Depending on the measurement method and perspective, there are two major types of urban greenness indices. Traditional urban greenness indices such as green cover [189], the leaf area index (LAI) [190], and the normalized difference vegetation index (NDVI) [191] are mostly measured and calculated using remote sensing images or GIS from an overhead perspective. Because of the shooting angle, remote sensing images frequently miss shrubs and lawns beneath tree canopies, as well as green vegetation on building walls.
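The NDVI mentioned above is the normalized difference of near-infrared and red reflectance, computed per pixel from two co-registered bands. The reflectance values below are synthetic and purely illustrative:

```python
# NDVI = (NIR - Red) / (NIR + Red), in [-1, 1]. Dense green vegetation
# reflects strongly in NIR and absorbs red, pushing NDVI towards 1, while
# bare soil or built surfaces give values near zero.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Per-pixel NDVI from near-infrared and red reflectance bands."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-9)  # guard zero division

veg = ndvi(np.array([0.50]), np.array([0.08]))[0]    # healthy vegetation
bare = ndvi(np.array([0.30]), np.array([0.25]))[0]   # bare soil, near zero
```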
Aoki's Green View Index (GVI) is a horizon-based index that assesses the level of greenery at the street level. Its value represents the proportion of green pixels in the pedestrian's field of view and reflects the degree to which pedestrians perceive their green surroundings. The level of street greenness has been shown to influence various dimensions, such as house prices [192], finances [193], crime rates [194], and residents' health [195,196]. The GVI is measured using three methods: field measurement, remote sensing measurement, and SVI measurement. The field measurement method uses image processing software to manually extract green vegetation areas [197]; such street shading and greenery analysis takes time and has a limited number of sampling points, making it appropriate only for small-scale studies. Remote sensing measurements have limited accuracy [198], and remote sensing images and LiDAR data are frequently combined with SVI data to assess urban greenery [199]. With the advent of the big data era, researchers have used SVI such as GSV and TSV images as new data sources, combined with CV techniques, to automatically extract greening areas and calculate GVI values, compensating for these deficiencies [200,201,202]. Commonly used methods for green area extraction, such as waveband operations, color space conversion, and image semantic segmentation, continue to be developed [203]. Dong et al. [204] used Beijing as the study area to extract green vegetation areas in TSV images and calculate GVI values using image segmentation algorithms, demonstrating the feasibility of assessing the greenness of complex streets in megacities with SVI. The GVI calculation requires extracting multiple GVI values from various locations in a neighborhood for data aggregation, and the choice of aggregation method can affect the final GVI and cause calculation errors. To address these shortcomings, Kumakoshi et al. [205] proposed an improved standardized GVI (sGVI) based on Voronoi tessellation to improve the accuracy of street greenness evaluation. Chiang et al. [206] calculated GVI and SVF indices from GSV images and verified the consistency of deep learning and manual classification methods. Zhang et al. [207] extended the application of the GVI and creatively proposed and validated a method to calculate the optimal GVI path; the visualization of geographic data in complex scenes was realized using the Floyd–Warshall algorithm, with Osaka, Japan as an example. All of the preceding studies assess the amount of street greenery based on the GVI for visualization, analysis, and application.
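The core of GVI extraction is classifying green-vegetation pixels and taking their share of the view. In place of a full semantic segmentation network, the sketch below uses a simple excess-green criterion (2G − R − B above a threshold) as a stand-in vegetation classifier; the threshold and the synthetic street scene are assumptions for the demo:

```python
# Sketch of GVI computation from an RGB street view image. A real pipeline
# would use semantic segmentation; the excess-green rule here is a crude
# colour-space stand-in for illustration only.
import numpy as np

def green_view_index(rgb: np.ndarray, thresh: float = 20.0) -> float:
    """rgb: H x W x 3 uint8 image. GVI = share of pixels classified green."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    excess_green = 2.0 * g - r - b      # high for green, ~0 for grey surfaces
    return float((excess_green > thresh).mean())

# Synthetic street scene: bottom 40% of rows are green vegetation.
img = np.full((10, 10, 3), (120, 120, 120), dtype=np.uint8)  # grey façades
img[6:, :] = (40, 160, 40)                                   # street trees
gvi = green_view_index(img)   # -> 0.4
```

A colour-rule classifier like this shares the weaknesses noted in the text (sensitivity to lighting and season), which is why segmentation-based extraction has become the dominant approach.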
Researchers have attempted to investigate new greenery assessment indices from various perspectives in order to adequately describe the complexity of the horizontal distribution and vertical structure of street greenery. Tong et al. [208] proposed a new street vegetation structure diversity (VSD) index by combining remote sensing and street view perspectives. The difference in the amount of greenery and green structure between old and new urban areas was demonstrated using Nanjing, China as an example. SVI measurement based on CV techniques and machine learning can automatically acquire street tree features on a large scale [209,210,211]. Liu et al. [212] created an automatic street tree detection and classification model based on SVI and deep learning. To deal with the long-tail effect of street trees, the model accuracy was improved by improving the loss function of YOLOv5. In the GSV depth map, the depth evaluation method was validated for the first time using the deep learning model Monodepth2. The city of Jinan, China was chosen as a test site to obtain the tree species, distribution density, canopy structure, and coverage through visual analysis, and a city-wide tree inventory was established. Yue et al. [213] used the DeepLabv3+ algorithm to efficiently extract shadow areas from panoramic images and proposed and validated a shadow coverage index.
As an important component influencing the urban environment, pedestrian flow data are critical for human-centered urban greening design. Traditional methods of collecting pedestrian flow data include manual counting and cell phone signal counting; these methods are labor-intensive, can only be applied to specific study areas, and lack scalability. The new pedestrian traffic collection approach is an image method based on video image processing technology. To overcome the limited coverage of a single surveillance camera, Wong et al. [214] created the OSNet + BDB model, which uses images from multiple surveillance cameras as data sources and can identify pedestrian trajectories and distribution features over a large area. Tokuda et al. [215] used the R-FCN algorithm and the ResNet-101 residual network to train a street image dataset and count the number of pedestrians. Li et al. [216] proposed a methodological framework consisting of K-fold max variance semi-supervised learning and DeepLab v3+ (KMSSL-DL). KMSSL-DL combines machine learning and computer vision techniques to estimate the number of pedestrians from unlabeled, high-dimensional urban feature data, and it uses the DeepLab v3+ model to identify street trees and plan them in a pedestrian-oriented manner. Predictions based on pedestrian analysis are used in scenarios such as train stations [217] and traffic intersections [218].
The proliferation of massive streetscape images aids the understanding of urban environments from the pedestrian's perspective, but it also has limitations. Streetscape images are not all taken at the same time, and the characteristics of the streetscape environment can be affected by different seasons and lighting conditions, resulting in measurement errors. To support the construction of smart cities, multiple heterogeneous urban data sources, such as SVI, remote sensing images, and social media data, are therefore fused [219], which places higher requirements on the new generation of information technologies represented by artificial intelligence (AI), the Internet of Things (IoT), and digital twins (DT) [220,221,222].

4.2. Building Construction Safety Monitoring Robot

Injuries occur at extremely high rates on construction sites, underscoring the importance of construction safety management and monitoring. The most common causes of accidents fall into two categories: those caused by workers themselves, such as the improper use of personal protective equipment (PPE) or unsafe behavior caused by fatigue, and those arising from the interaction between workers and the surrounding environment, such as equipment, sites, and materials. Methods for monitoring construction safety include both manual observation and image measurement. Based on the dimensionality of the captured images, mainstream image measurement is divided into 2D (acquired with surveillance or fixed cameras) and 3D (acquired with RGB-D sensors or LiDAR) approaches [223]. CV techniques such as target detection, target tracking, and action recognition, combined with deep learning algorithms applied to images or videos, can automatically monitor construction site information to ensure safety and productivity. Traditional image measurement with a fixed camera position, however, cannot adapt to the complex and changing environment of a construction site: the color of workers’ clothing, the site lighting, and the camera position all affect information acquisition. As a result, the combination of computer vision and robotic automation has emerged as a new trend in the construction industry.
Most early robots combined robotic technology with common construction techniques to replace human labor, and they are typically used for static manual tasks in which the working position is essentially fixed, such as welding [224], bricklaying [225], and facility assembly [226,227].
Mobile robots based on proximity sensors and CV technology were further developed to overcome the limitations of fixed camera positions [228,229]. Li et al. [230] created a mobile robot with an intelligent lifting system that uses the YOLOv2 algorithm to automate the lifting of large components such as prefabricated floor slabs. Kim et al. [231] created and validated a framework that automates real-time target detection and trajectory prediction in construction using a camera-mounted UAV together with YOLOv3 and a DNN (S-GAN) algorithm. Wang et al. [232] developed a construction waste recycling robot combining a Faster R-CNN target detection algorithm with a full-coverage path planning algorithm; to ensure worker safety while reducing material waste, this robot can monitor and retrieve nails and screws scattered on the ground in real time. Luo et al. [233] proposed a framework for intelligent pose estimation for various types of construction equipment; both the Stacked Hourglass Network (HG) and the Cascaded Pyramid Network (CPN) models were found to be more than 90% accurate. Lee et al. [234] created an autonomous mobile robot capable of real-time monitoring of PPE usage on construction sites. It can inspect unsafe behaviors automatically, using SLAM for localization and navigation and YOLOv3 for target detection.
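Once a detector such as YOLOv3 has produced person and helmet bounding boxes, the PPE-compliance check itself can be reduced to a simple geometric rule. The sketch below (the boxes, labels, and upper-third "head region" heuristic are illustrative assumptions, not the published method of [234]) flags workers with no helmet detected near their head:

```python
def center(box):
    """Centre point of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def wears_helmet(person, helmets):
    """True if any detected helmet centre falls in the upper third of
    the person bounding box (image coordinates, y grows downward)."""
    x1, y1, x2, y2 = person
    head_y = y1 + (y2 - y1) / 3          # assumed head region boundary
    for h in helmets:
        cx, cy = center(h)
        if x1 <= cx <= x2 and y1 <= cy <= head_y:
            return True
    return False

# Hypothetical detector output: two workers, one detected helmet.
persons = [(100, 50, 180, 300), (300, 60, 380, 310)]
helmets = [(120, 55, 160, 90)]           # worn by the first worker

violations = [p for p in persons if not wears_helmet(p, helmets)]
print(len(violations))  # 1 worker without a helmet
```

In a deployed system this rule would run per frame on the robot's video stream, with tracking used to suppress spurious single-frame violations.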
All of the studies mentioned above strongly support the prospect of combining automated robots with CV technology, which is beneficial in terms of cost savings and efficiency. Currently, certain technical challenges remain to be overcome in the acquisition, training, and analysis of high-quality datasets in complex environments on construction sites.

5. HVAC Equipment Monitoring

5.1. Thermal Infrared-Image-Based Device Monitoring

Failures in equipment and components can result in abnormalities in a system’s temperature distribution. As a result, the temperature can be used to analyze the operating status of HVAC system equipment and piping, and it is one of the most commonly used indicators in the field of equipment monitoring.
TIR cameras are non-contact condition monitoring instruments that measure an object’s temperature and its dynamic changes from a distance to obtain thermal images of equipment or components; temperature anomalies can then be monitored and analyzed to alert personnel, ensuring timely maintenance and preventing failures. Infrared imaging technology is maturing and is widely used for condition monitoring in a variety of fields, such as machinery monitoring, electrical equipment monitoring, civil structure monitoring, and nuclear industry monitoring (Figure 16) [234,235,236,237,238,239,240].
The HVAC system is complicated, incorporating multiple parts of the refrigeration, electrical, and air systems, as well as a variety of equipment. The system’s faults are interconnected and affect one another, and their causes are complex and difficult to identify as a whole. Induction motor (IM) failures, mechanical failures, and thermal failures are all common in HVAC systems [241,242,243].
Infrared imaging is widely used for thermal troubleshooting and HVAC system performance monitoring. Using infrared images, researchers have monitored the heat transfer of heat exchanger or condenser fin surfaces, as well as tube walls [244,245]. To monitor the heat transfer performance of air-cooled condensers, Ge et al. [246] used infrared imaging to obtain the condenser’s overall and local surface temperature profiles. They established that the ambient air temperature, natural airflow, and surface defects all have an impact on unit performance. Based on infrared images, Sarraf et al. [247] investigated the effect of steam desuperheating on the performance of a plate heat exchanger as a condenser. The presence of superheated steam at the condenser inlet improved the local heat transfer at lower than saturated wall temperatures. Based on infrared images, researchers have investigated the effect of the number of fins, shape, height, width, and Reynolds number on heat exchanger performance [248,249,250]. The use of shorter continuous corrugated fins in genset air-cooled condensers can improve the heat transfer efficiency [251]. Choosing the appropriate number of fins improves the heat transfer performance of the steam chamber heat sink at low Reynolds numbers. Choosing a larger number of fins improves the heat transfer performance at high Reynolds numbers [252]. TIR-based equipment condition monitoring ensures that the equipment is operating normally and safely, improves system heat transfer efficiency, and lowers maintenance costs.
He et al. [253] used video and audio signals to develop a non-contact method for fault diagnosis and detection in refrigeration plant rooms based on an inspection robot. As shown in Figure 17, the inspection robot collects video and audio data with an infrared camera, a standard camera, and a microphone. First, an image of the machine room’s equipment is captured using a standard camera at a predetermined location, and the image is classified using the AlexNet convolutional neural network. If the image classification involves a dial, the dial value is read using the image morphology method to determine whether the pipeline is functioning properly. If the image classification involves a pump, the audio sensor is used to determine whether the sound is normal, and an infrared camera is used to acquire a thermal image of the pump. The optical character recognition (OCR) method is used to determine whether the pump is overheating by identifying the maximum temperature of the pump on the infrared image. The relevant experiments were carried out in an air conditioning plant room in Shanghai, China for this study. The results showed that the proposed method can detect mechanical and thermal faults in equipment and piping. However, the method is currently designed and applied primarily for pump and dial faults in refrigeration rooms, and non-contact fault diagnosis methods applicable to other refrigeration room equipment must be further developed.
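The decision flow of this inspection pipeline can be sketched with the perception steps (CNN classification, dial reading, audio check, OCR temperature reading) passed in as callables, so the branching logic stands on its own. The thresholds and stub values below are illustrative, not figures from [253]:

```python
def run_inspection(image, thermal_image, audio, classify, read_dial,
                   sound_ok, max_temp, temp_limit=70.0,
                   dial_range=(0.2, 0.8)):
    """Branching logic of the inspection pipeline: classify the scene,
    then check the dial reading, pump sound, and pump temperature as
    appropriate. Returns a list of detected fault descriptions."""
    faults = []
    kind = classify(image)            # e.g. AlexNet: 'dial' or 'pump'
    if kind == 'dial':
        value = read_dial(image)      # image-morphology dial reading
        lo, hi = dial_range
        if not lo <= value <= hi:
            faults.append('pipeline pressure out of range')
    elif kind == 'pump':
        if not sound_ok(audio):       # microphone-based sound check
            faults.append('abnormal pump sound')
        t = max_temp(thermal_image)   # OCR of IR-image max temperature
        if t > temp_limit:
            faults.append('pump overheating')
    return faults

# Toy run with stubbed perception functions standing in for the
# CNN, dial reader, audio classifier, and OCR step.
faults = run_inspection(
    image='pump.jpg', thermal_image='pump_ir.jpg', audio='pump.wav',
    classify=lambda img: 'pump',
    read_dial=lambda img: 0.5,
    sound_ok=lambda a: True,
    max_temp=lambda t: 85.0,
)
print(faults)  # ['pump overheating']
```

Extending the same structure to other plant room equipment mainly means adding new `kind` branches and their perception callables, which is exactly the generalization the authors identify as future work.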

5.2. Visible-Image-Based Device Monitoring

Visible light images combined with video image processing techniques can provide more visual information than infrared thermal images. The monitoring of equipment health using visible light images is a research area that has received a lot of attention in recent years. In particular, progress has been made regarding the problems of frost on the surfaces of heat pumps and condensation on the surfaces of heat exchangers.

5.2.1. Heat Pump Surface Frosting Phenomenon Monitoring

Frost has a significant impact on heat exchangers’ heat transfer performance, and research on frost worldwide has focused on the analysis of frost formation mechanisms and characteristics, the simulation of heat exchanger frosting characteristics, and defrost control. Air source heat pump (ASHP) systems have become a common alternative to traditional coal-fired space heating in residential and commercial buildings around the world due to their energy efficiency and environmental benefits. Regular defrosting is required to maintain the safe operating performance of ASHPs. Various frost suppression, frost retardation, and defrosting methods have been developed in recent decades, but they still cannot avoid the reduction in energy efficiency caused by incorrect defrosting operations. By acquiring video or images with high-speed cameras and analyzing frost growth with image processing techniques, more precise start–stop control points for defrosting systems can be identified, significantly reducing defrosting energy consumption and enabling smart defrosting (Figure 18).
Early vision studies used the frost thickness, frost coverage, and fractal dimension to quantify the degree of frosting in three dimensions: thickness, area, and density. The system regulates the defrost start–stop state based on whether the frost layer’s characteristic parameters reach a predetermined threshold. To observe the variation in frost layer thickness and frost growth on a circular tube under various environmental conditions, Zhou et al. [254] used a CCD high-speed camera and a microscopic imaging system; the frost layer thickness was calculated using image processing techniques, and the method’s feasibility was verified by comparing calculated and measured values. Wu et al. [255] investigated the relationship between the distribution of crystals in the frost layer and the frost layer thickness using a CCD camera on a parallel-flow evaporator under three different operating conditions. During the frost growth and full-growth periods, frost crystals accounted for a larger fraction of the frost closer to the cold surfaces of the fins. Malik et al. [256] proposed a hybrid system with both monitoring and defrosting functions to monitor the evaporator frost thickness in real time and found that defrosting when the frost thickness reached an optimal threshold of 6 mm could reduce household refrigerator energy consumption by 10%.
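A minimal sketch of grey-value frost quantification (not the exact methods of [254,255,256]) illustrates the two basic parameters: frost pixels are taken as those brighter than a threshold in an 8-bit side-view image, coverage is their area fraction, and thickness is the mean frost-column height converted with an assumed pixel calibration:

```python
import numpy as np

def frost_metrics(gray, frost_level=200, mm_per_px=0.05):
    """Grey-value frost quantification on a side-view image.

    Assumptions (illustrative): frost appears as bright pixels above
    `frost_level`; `mm_per_px` is a camera calibration constant.
    Returns (coverage fraction, mean frost thickness in mm).
    """
    frost = gray >= frost_level
    coverage = frost.mean()                  # fraction of frosted pixels
    col_heights = frost.sum(axis=0)          # frost pixels per column
    thickness_mm = col_heights.mean() * mm_per_px
    return coverage, thickness_mm

# Synthetic 100x100 side view: a uniform 30-pixel-high frost band.
img = np.zeros((100, 100), dtype=np.uint8)
img[:30, :] = 255
cov, thick = frost_metrics(img)
print(cov, thick)        # 0.3 coverage, 1.5 mm mean thickness
print(thick >= 6.0)      # simple defrost trigger at a 6 mm threshold
                         # (as in Malik et al. [256]): False
```

Real frost images need illumination normalization and a calibrated threshold, which is precisely the weakness the later grey-value improvements address.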
The current stage of frost observation research focuses on improving the original grey-value calculation method and creating new frost characteristic parameters to achieve more accurate and timely defrost control strategies. Yoo et al. [257] used measured data from an ASHP system to estimate the amount of frost per unit time step and the total amount of frost, in order to determine the best defrost start time when system performance decreased. Zheng et al. [258] proposed a new temperature–humidity–image (T-H-I) defrost control method. Non-frosting, moderate frosting, and severe frosting zones were classified using image processing techniques. To evaluate the frost degree, the frost coefficient P was introduced, and the optimal state points were determined and verified: defrosting began at P1 = 0.3 and ended at P2 = 0.05. The T-H-I method’s defrost start–stop control information is more accurate and significantly reduces false defrosts. Miao et al. [259] improved the T-H-I method by characterizing the frost degree in terms of thickness and structure. According to the experimental results, defrosting was optimally started at a frost thickness of 0.726 mm and a fractal dimension of 2.839 and terminated when the fractal dimension fell to 2.324. The improved T-H-I defrost control strategy improved accuracy and energy efficiency. Using the characteristic parameter F, Li et al. [260] proposed a method to quantify the degree of frost on the outdoor heat exchanger surface. A series of experiments validated the method’s applicability with respect to shooting angle, imaging pixels, illumination, and outdoor heat exchanger surface temperature, and it was found that only ambient illumination affected the method’s detection accuracy in practical applications. Considering the effect of illumination variations on image recognition accuracy, Wang et al. [261] proposed surface-source-compensated illumination to improve frost detection: a benchmark illumination surface source was chosen, the influence of the light environment was compensated for with a frost threshold correction coefficient, and a light-adaptive image recognition and frost measurement technology for ASHPs was developed, eliminating the impact of outdoor lighting variations on the accuracy of image-based frost measurement.
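The start–stop state points of the T-H-I method [258] amount to hysteresis control on the frost coefficient P: defrosting starts when P rises to P1 = 0.3 and stops when it falls to P2 = 0.05. The controller can be sketched in a few lines (how P is computed from frost images is outside this sketch):

```python
class DefrostController:
    """Hysteresis control on the frost coefficient P, using the state
    points reported by Zheng et al. [258]: start at P1 = 0.3, stop at
    P2 = 0.05. The image-processing step that produces P per frame is
    assumed to exist upstream."""

    def __init__(self, p_start=0.30, p_stop=0.05):
        self.p_start, self.p_stop = p_start, p_stop
        self.defrosting = False

    def update(self, p):
        """Feed one frost-coefficient sample; return defrost state."""
        if not self.defrosting and p >= self.p_start:
            self.defrosting = True          # frost degree reached P1
        elif self.defrosting and p <= self.p_stop:
            self.defrosting = False         # frost cleared to P2
        return self.defrosting

ctrl = DefrostController()
trace = [ctrl.update(p) for p in (0.10, 0.25, 0.32, 0.20, 0.06, 0.04)]
print(trace)  # [False, False, True, True, True, False]
```

The two separated thresholds are what suppress the false-defrost chatter that a single threshold would produce as P fluctuates near the trigger point.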

5.2.2. Indirect Evaporative Cooler Condensation Monitoring

The indirect evaporative cooler (IEC) uses the evaporative heat absorption of water to cool fresh air (Figure 19 and Figure 20) with a simple, clean, and efficient structure that has developed rapidly over recent decades [262,263,264]. In recent years, the application potential of the IEC in hot and humid regions has been investigated: the hot and humid fresh air in the primary channels can be cooled below the dew point, producing condensation. The IEC then acts as a heat recovery device, cooling and dehumidifying simultaneously, both of which save energy. Alongside theoretical studies of the condensate film in the primary channel [265,266,267], visualization studies based on CV technology have also received attention.
Researchers have created visualization experimental benches (Figure 21) and captured images of condensation on the plate surface in the primary channel with a high-speed camera for image processing and analysis. Simultaneously, IEC performance indices were computed from experimental data provided by the sensing equipment, and a link between condensation and IEC heat and mass transfer performance was established. Meng et al. [268] studied IEC performance under different inlet conditions by observing the condensation phenomenon in the primary air channel of a cross-flow IEC, obtaining the turning points from no condensation to partial condensation and from partial to full condensation. The overall performance of the IEC was investigated experimentally under various inlet primary air temperature and humidity conditions; condensation was found to raise the outlet primary air temperature and water consumption, reduce the wet bulb efficiency, and increase the total heat transfer by releasing latent heat. Min et al. [269] modified the analytical model of the heat flux by observing the coverage of the dropwise and filmwise condensation areas and quantitatively investigated the effects of the inlet primary air temperature, relative humidity, and flow rate on condensation heat transfer in the IEC, as shown in Figure 22. The experimental results show that as the inlet primary air temperature rises, the area ratios of dropwise condensation (DWC) and filmwise condensation (FWC) remain relatively stable at 0.4 and 0.6, respectively, while the total heat flux rises slightly. The primary air relative humidity has a significant effect on the total heat flux of the plate surface; at higher relative humidity, the FWC area increases while the growth rate of the heat transfer coefficient decreases.
When the air flow rate is higher, the DWC area ratio increases, improving the condensation heat transfer performance. It had previously been demonstrated that applying a hydrophilic coating to the secondary channels of the IEC can significantly improve wettability and evaporation efficiency [270]. Inspired by this, Min et al. [271] coated the surface of the primary air channel with a hydrophobic silicon nano-material to investigate its effect on the cooling and dehumidification capacity, as well as the heat transfer performance, of the IEC under hot and humid conditions. Using image processing techniques, it was found that the hydrophobic surface promoted dropwise condensation with smaller droplet diameters. Reducing droplet size and removing droplets frequently could improve the convective heat transfer of the treated air flowing over the coated panels. IEC energy savings improved by 8.5–17.2%, indicating potential in dehumidifying air conditioning applications.
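The DWC/FWC area ratios in these studies reduce to pixel counting once the plate surface has been segmented. The sketch below assumes such a per-pixel segmentation already exists (the segmentation step itself, whether classical image processing or a learned model, is not shown):

```python
import numpy as np

def condensation_area_ratios(labels):
    """Area ratios of dropwise (DWC) and filmwise (FWC) condensation
    from a per-pixel segmentation of the plate surface.

    labels: integer array with 0 = dry, 1 = DWC, 2 = FWC
    (an assumed labeling convention for this sketch).
    """
    total = labels.size
    dwc = np.count_nonzero(labels == 1) / total
    fwc = np.count_nonzero(labels == 2) / total
    return dwc, fwc

# Synthetic 10x10 mask: 40 DWC pixels and 60 FWC pixels, matching the
# roughly stable 0.4 / 0.6 split reported in [269].
mask = np.zeros((10, 10), dtype=int)
mask[:4, :] = 1
mask[4:, :] = 2
print(condensation_area_ratios(mask))  # (0.4, 0.6)
```

Tracked over time, these two ratios are exactly the quantities that the modified heat flux models take as inputs when weighting dropwise against filmwise heat transfer.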
The field of multi-phase heat flow research has successfully combined visualization data with machine learning and deep learning to predict two-phase flow heat transfer. Accordingly, some researchers envision combining CV techniques with artificial intelligence algorithms: the dynamic properties of condensate droplets are tracked and analyzed to obtain visualization data such as the nucleation density, droplet growth rate, and average droplet diameter, and datasets are built so that deep learning, neural networks, and other AI algorithms can be trained to establish the link between visualization properties and heat transfer performance [272,273,274].
Combining advanced condensation heat transfer measurement techniques such as CV with artificial intelligence algorithms is a dependable and cost-effective approach: it eliminates the need for a large number of sensing devices, reduces systematic errors to some extent, acquires visual features from condensate droplet images alone, trains machine learning models and neural network frameworks, and estimates condensation performance quickly. However, little research has addressed the measurement of condensation heat transfer on external surfaces, and the extraction of visualization features, particularly dynamic droplet features, requires further investigation.
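The final fitting step, from droplet-image features to a heat transfer performance estimate, can be shown with a least-squares sketch on synthetic data. Everything here is illustrative: the feature set, the linear relation, and the noise level are assumptions, and a real dataset would come from tracked droplet videos paired with measured performance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical visual features per video clip: nucleation density,
# droplet growth rate, mean droplet diameter (arbitrary units).
X = rng.uniform(0.0, 1.0, size=(200, 3))

# Synthetic "heat transfer performance" target. In reality this would
# be measured experimentally; here it is a made-up linear relation
# plus noise, just to show the fitting step.
true_w = np.array([2.0, 1.5, -0.8])
y = X @ true_w + 0.5 + rng.normal(0.0, 0.05, size=200)

# Least-squares fit linking visual features to performance.
A = np.hstack([X, np.ones((200, 1))])     # add an intercept column
w, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ w
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print(np.round(w, 2))   # recovered weights, close to [2.0, 1.5, -0.8, 0.5]
print(rmse < 0.1)       # True on this synthetic data
```

The cited works replace this linear model with neural networks, but the pipeline shape is the same: features extracted by CV on one side, a measured thermal quantity on the other, and a learned mapping between them.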

6. Summary and Outlook

New artificial intelligence technologies have facilitated the widespread use of non-contact measurement methods. Three areas have seen progress: indoor environmental monitoring, outdoor environmental monitoring, and HVAC equipment monitoring.

6.1. Indoor Environmental Monitoring

Non-contact measurement based on infrared and visible images effectively detects a person’s thermal state via skin temperature and posture. In recent years, this approach has also been widely used for sleep state monitoring and demand-controlled ventilation.
  • The algorithm’s performance in detecting more types of human posture should be improved in the future. Currently, automated quantitative observations of frost and dew condensation only reflect the degree of condensation indirectly, through dew coverage.
  • The majority of current research is focused on gathering information about the indoor environment. It is necessary to consider combination with control technologies to achieve the real-time automated regulation of indoor environments based on personnel’s thermal status.

6.2. Outdoor Environmental Monitoring

The complex and changing outdoor environment can have an impact on pedestrian comfort. It is critical to create high-quality image datasets for urban environmental monitoring, particularly non-contact environmental monitoring in construction scenarios.
  • The further integration of SVI and remote sensing images with social media data, weather conditions, human posture, and other heterogeneous urban data should be pursued, leveraging the new generation of information technology represented by artificial intelligence (AI), the Internet of Things (IoT), digital twins (DT), and inspection robots.

6.3. HVAC Equipment Monitoring

The combination of unstructured data (image and audio signals) from HVAC equipment with structured data collected by existing BASs and inspection robots enables the real-time automatic diagnosis of equipment faults.
  • To achieve more precise defrosting timing, frost suppression, frost retardation, and defrosting methods should be selected according to local conditions, reducing the energy consumption of the defrosting process and stabilizing unit operation while maintaining indoor thermal comfort and reducing the number of defrost cycles.
  • Intelligent defrosting strategies based on CV technology should continue to be developed, quantifying the degree of frosting by introducing new feature parameters derived from the original image data.
  • It is important to extend video shooting times and shoot condensation surfaces from multiple camera positions to reduce visualization experimental errors, and to create new CV algorithms that incorporate dynamic droplet features, such as the droplet growth rate, shedding frequency, number of merging droplets, and number of shedding events, to build more reliable condensation datasets.
  • It is also important to develop a generic condensation heat transfer performance prediction model by combining CV with AI techniques such as deep learning.

Author Contributions

Conceptualization, B.Y. and X.C.; methodology, S.Y.; formal analysis, S.Y., X.Z., M.Q. and H.L.; investigation, S.Y.; resources, Z.L.; data curation, S.Y.; writing—original draft preparation, S.Y.; writing—review and editing, B.Y. and F.W.; visualization, Z.L.; supervision, B.Y. and F.W.; project administration, B.Y.; funding acquisition, B.Y. All authors have read and agreed to the published version of the manuscript.

Funding

The authors are grateful for the support of the National Natural Science Foundation of China (No. 52278119).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors are grateful for the support of the National Natural Science Foundation of China (No. 52278119).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Borodinecs, A.; Zemitis, J.; Palcikovskis, A. HVAC system control solutions based on modern IT technologies: A review article. Energies 2022, 15, 6726. [Google Scholar] [CrossRef]
  2. Zemitis, J.; Borodinecs, A.; Sidenko, N.; Zajacs, A. Simulation of IAQ and thermal comfort of a classroom at various ventilation strategies. E3S Web Conf. 2023, 396, 03005. [Google Scholar] [CrossRef]
  3. Fanger, P.O. Thermal comfort: Analysis and applications in environmental engineering. Appl. Ergon. 1970, 3, 181. [Google Scholar]
  4. Huizenga, C.; Zhang, H.; Arens, E.; Wang, D. Skin and core temperature response to partial-and whole-body heating and cooling. J. Therm. Biol. 2004, 29, 549–558. [Google Scholar] [CrossRef] [Green Version]
  5. Takada, S.; Matsumoto, S.; Matsushita, T. Prediction of whole-body thermal sensation in the non-steady state based on skin temperature. Build. Environ. 2013, 68, 123–133. [Google Scholar] [CrossRef]
  6. Choi, J.H.; Yeom, D. Study of data-driven thermal sensation prediction model as a function of local body skin temperatures in a built environment. Build. Environ. 2017, 121, 130–147. [Google Scholar] [CrossRef]
  7. Dang, Y.; Liu, Z.; Yang, X.; Ge, L.; Miao, S. A fatigue assessment method based on attention mechanism and surface electromyography. Int. Things Cyber Phys. Syst. 2023, 3, 112–120. [Google Scholar] [CrossRef]
  8. Yang, B.; Cheng, X.; Dai, D.; Olofsson, T.; Li, H.; Meier, A. Macro pose based non-invasive thermal comfort perception for energy efficiency. arXiv 2018, arXiv:1811.07690. [Google Scholar]
  9. Akbari, H.; Cartalis, C.; Kolokotsa, D.; Muscio, A.; Pisello, A.L.; Rossi, F.; Santamouris, M.; Synnefa, A.; Wong, N.H.; Zinzi, M. Local climate change and urban heat island mitigation techniques-the state of the art. J. Civ. Eng. Manag. 2016, 22, 1–16. [Google Scholar] [CrossRef] [Green Version]
  10. Mijani, N.; Alavipanah, S.K.; Hamzeh, S.; Firozjaei, M.K.; Arsanjani, J.J. Modeling thermal comfort in different condition of mind using satellite images: An Ordered Weighted Averaging approach and a case study. Ecol. Indic. 2019, 104, 1–12. [Google Scholar] [CrossRef]
  11. Wibowo, A.; Salleh, K.O. Landscape features and potential heat hazard threat: A spatial-temporal analysis of two urban universities. Nat. Hazards 2018, 92, 1267–1286. [Google Scholar] [CrossRef]
  12. Pantavou, K.; Lykoudis, S.; Psiloglou, B. Air quality perception of pedestrians in an urban outdoor Mediterranean environment: A field survey approach. Sci. Total Environ. 2017, 574, 663–670. [Google Scholar] [CrossRef] [PubMed]
  13. Zakaria, M.F.; Ezani, E.; Hassan, N.; Ramli, N.A.; Wahab, M.I.A. Traffic-related air pollution (TRAP), air quality perception and respiratory health symptoms of active commuters in a university outdoor environment. IOP Conf. Ser. Earth Env. Sci. 2019, 22, 012017. [Google Scholar] [CrossRef]
  14. Gao, W.; Qian, Y.; Chen, H.; Zhong, Z.; Zhou, M.; Aminpour, F. Assessment of sidewalk walkability: Integrating objective and subjective measures of identical context-based sidewalk features. Sustain. Cities Soc. 2022, 87, 104142. [Google Scholar] [CrossRef]
  15. Ma, X.; Chau, C.K.; Lai, J.H.K. Critical factors influencing the comfort evaluation for recreational walking in urban street environments. Cities 2021, 116, 103286. [Google Scholar] [CrossRef]
  16. Berkouk, D.; Bouzir, T.A.K.; Boucherit, S.; Khelil, S.; Mahaya, C.; Matallah, M.E.; Mazouz, S. Exploring the multisensory interaction between luminous, thermal and auditory environments through the spatial promenade experience: A case study of a university campus in an oasis settlement. Sustainability 2022, 14, 4013. [Google Scholar] [CrossRef]
  17. De Oliveira, F.; Moreau, S.; Gehin, C.; Dittmar, A. Infrared imaging analysis for thermal comfort assessment. In Proceedings of the 2007 29th Annual International Conference of The IEEE Engineering in Medicine and Biology Society, Lyon, France, 22–26 August 2007; pp. 3373–3376. [Google Scholar]
  18. Ranjan, J.; Scott, J. ThermalSense: Determining dynamic thermal comfort preferences using thermographic imaging. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Heidelberg, Germany, 12–16 September 2016; pp. 1212–1222. [Google Scholar]
  19. Li, D.; Menassa, C.C.; Kamat, V.R. Non-intrusive interpretation of human thermal comfort through analysis of facial infrared thermography. Energy Build. 2018, 176, 246–261. [Google Scholar] [CrossRef]
  20. Tejedor, B.; Casals, M.; Gangolells, M.; Macarulla, M.; Forcada, N. Human comfort modelling for elderly people by infrared thermography: Evaluating the thermoregulation system responses in an indoor environment during winter. Build. Environ. 2020, 186, 107354. [Google Scholar] [CrossRef]
  21. Ghahramani, A.; Castro, G.; Becerik-Gerber, B.; Yu, X. Infrared thermography of human face for monitoring thermoregulation performance and estimating personal thermal comfort. Build. Environ. 2016, 109, 1–11. [Google Scholar] [CrossRef] [Green Version]
  22. Wu, Y.; Liu, H.; Li, B.; Kosonen, R. Prediction of thermal sensation using low-cost infrared array sensors monitoring system. IOP Conf. Ser. Mater. Sci. Eng. 2019, 609, 032002. [Google Scholar] [CrossRef]
  23. Burzo, M.; Abouelenien, M.; Pérez-Rosas, V.; Wicaksono, C.; Tao, Y.; Mihalcea, R. Using infrared thermography and biosensors to detect thermal discomfort in a building’s inhabitants. In Proceedings of the ASME International Mechanical Engineering Congress and Exposition, Montreal, Quebec, Canada, 14–20 November 2014; American Society of Mechanical Engineers: New York, NY, USA, 2014; p. V06BT07A015. [Google Scholar]
  24. Pavlin, B.; Pernigotto, G.; Cappelletti, F.; Bison, P.; Vidoni, R.; Gasparella, A. Real-time monitoring of occupants’ thermal comfort through infrared imaging: A preliminary study. Buildings 2017, 7, 10. [Google Scholar] [CrossRef]
  25. Aryal, A.; Becerik-Gerber, B. A comparative study of predicting individual thermal sensation and satisfaction using wrist-worn temperature sensor, thermal camera and ambient temperature sensor. Build. Environ. 2019, 160, 106223. [Google Scholar] [CrossRef]
  26. Kopaczka, M.; Breuer, L.; Schock, J.; Merhof, D. A modular system for detection, tracking and analysis of human faces in thermal infrared recordings. Sensors 2019, 19, 4135. [Google Scholar] [CrossRef] [Green Version]
  27. Ghahramani, A.; Xu, Q.; Min, S.; Wang, A.; Zhang, H.; He, Y.; Merritt, A.; Levinson, R. Infrared-fused vision-based thermoregulation performance estimation for personal thermal comfort-driven HVAC system controls. Buildings 2022, 12, 1241. [Google Scholar] [CrossRef]
  28. He, Y.; Zhang, H.; Arens, E.; Merritt, A.; Huizenga, C.; Levinson, R.; Wang, A.; Ghahramani, A.; Alvarez-Suarez, A. Smart detection of indoor occupant thermal state via infrared thermography, computer vision, and machine learning. Build. Environ. 2023, 228, 109811. [Google Scholar] [CrossRef]
29. Metzmacher, H.; Wölki, D.; Schmidt, C.; Frisch, J.; van Treeck, C. Real-time human skin temperature analysis using thermal image recognition for thermal comfort assessment. Energy Build. 2018, 158, 1063–1078.
30. Li, D.; Menassa, C.C.; Kamat, V.R. Robust non-intrusive interpretation of occupant thermal comfort in built environments with low-cost networked thermal cameras. Appl. Energy 2019, 251, 113336.
31. Cosma, A.C.; Simha, R. Thermal comfort modeling in transient conditions using real-time local body temperature extraction with a thermographic camera. Build. Environ. 2018, 143, 36–47.
32. Cosma, A.C.; Simha, R. Machine learning method for real-time non-invasive prediction of individual thermal preference in transient conditions. Build. Environ. 2019, 148, 372–383.
33. Ventola, C.L. Social media and health care professionals: Benefits, risks, and best practices. Pharm. Ther. 2014, 39, 491.
34. Jung, W.; Jazizadeh, F. Vision-based thermal comfort quantification for HVAC control. Build. Environ. 2018, 142, 513–523.
35. Wu, H.Y.; Rubinstein, M.; Shih, E.; Guttag, J.; Durand, F.; Freeman, W. Eulerian video magnification for revealing subtle changes in the world. ACM Trans. Graph. 2012, 31, 65.
36. Alghoul, K.; Alharthi, S.; Al Osman, H.; El Saddik, A. Heart rate variability extraction from videos signals: ICA vs. EVM comparison. IEEE Access 2017, 5, 4711–4719.
37. Jazizadeh, F.; Pradeep, S. Can computers visually quantify human thermal comfort? Short paper. In Proceedings of the 3rd ACM International Conference on Systems for Energy-Efficient Built Environments, Palo Alto, CA, USA, 16–17 November 2016; pp. 95–98.
38. Jazizadeh, F.; Jung, W. Personalized thermal comfort inference using RGB video images for distributed HVAC control. Appl. Energy 2018, 220, 829–841.
39. Cheng, X.; Yang, B.; Olofsson, T.; Liu, G.; Li, H. A pilot study of online non-invasive measuring technology based on video magnification to determine skin temperature. Build. Environ. 2017, 121, 1–10.
40. Cheng, X.; Yang, B.; Hedman, A.; Olofsson, T.; Li, H.; Van Gool, L. NIDL: A pilot study of contactless measurement of skin temperature for intelligent building. Energy Build. 2019, 198, 340–352.
41. Cheng, X.; Yang, B.; Tan, K.; Isaksson, E.; Li, L.; Hedman, A.; Olofsson, T.; Li, H. A contactless measuring method of skin temperature based on the skin sensitivity index and deep learning. Appl. Sci. 2019, 9, 1375.
42. Chen, Y.; Wang, Z.; Peng, Y.; Zhang, Z.; Yu, G.; Sun, J. Cascaded pyramid network for multi-person pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7103–7112.
43. Chen, Y.; Shen, C.; Wei, X.S.; Liu, L.; Yang, J. Adversarial posenet: A structure-aware convolutional network for human pose estimation. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 1212–1221.
44. Pfister, T.; Charles, J.; Zisserman, A. Flowing convnets for human pose estimation in videos. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1913–1921.
45. Pishchulin, L.; Insafutdinov, E.; Tang, S.; Andres, B.; Andriluka, M.; Gehler, P.V.; Schiele, B. Deepcut: Joint subset partition and labeling for multi person pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4929–4937.
46. Insafutdinov, E.; Pishchulin, L.; Andres, B.; Andriluka, M.; Schiele, B. Deepercut: A deeper, stronger, and faster multi-person pose estimation model. In Proceedings of the 14th European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 11–14 October 2016; pp. 34–50.
47. Vemulapalli, R.; Arrate, F.; Chellappa, R. Human action recognition by representing 3D skeletons as points in a Lie group. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 588–595.
48. Galna, B.; Barry, G.; Jackson, D.; Mhiripiri, D.; Olivier, P.; Rochester, L. Accuracy of the Microsoft Kinect sensor for measuring movement in people with Parkinson’s disease. Gait Posture 2014, 39, 1062–1068.
49. Toshev, A.; Szegedy, C. Deeppose: Human pose estimation via deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1653–1660.
50. Cao, Z.; Simon, T.; Wei, S.E.; Sheikh, Y. Realtime multi-person 2D pose estimation using part affinity fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7291–7299.
51. Qian, J.; Cheng, X.; Yang, B.; Li, Z.; Ren, J.; Olofsson, T.; Li, H. Vision-based contactless pose estimation for human thermal discomfort. Atmosphere 2020, 11, 376.
52. Güler, R.A.; Neverova, N.; Kokkinos, I. Densepose: Dense human pose estimation in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7297–7306.
53. Li, J.; Wang, C.; Zhu, H.; Mao, Y.; Fang, H.S.; Lu, C. Crowdpose: Efficient crowded scenes pose estimation and a new benchmark. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 10863–10872.
54. Meier, A.; Dyer, W.; Graham, C. Using human gestures to control a building’s heating and cooling system. In Proceedings of the 9th International Conference on Energy Efficiency in Domestic Appliances and Lighting (EEDAL’17), Irvine, CA, USA, 13–15 September 2017; pp. 627–635.
55. Xu, M.; Han, Y.; Liu, Q.; Zhao, L. Action-based personalized dynamic thermal demand prediction with video cameras. Build. Environ. 2022, 223, 109457.
56. Liu, P.L.; Chang, C.C. Simple method integrating OpenPose and RGB-D camera for identifying 3D body landmark locations in various postures. Int. J. Ind. Ergon. 2022, 91, 103354.
57. Wang, H.; Wang, G.; Li, X. An RGB-D camera-based indoor occupancy positioning system for complex and densely populated scenarios. Indoor Built Environ. 2023, 32, 1420326X231155112.
58. Yang, B.; Cheng, X.; Dai, D.; Olofsson, T.; Li, H.; Meier, A. Real-time and contactless measurements of thermal discomfort based on human poses for energy efficient control of buildings. Build. Environ. 2019, 162, 106284.
59. Chen, Z.; Jiang, C.; Xie, L. Building occupancy estimation and detection: A review. Energy Build. 2018, 169, 260–270.
60. Priyadarshini, R.; Mehra, R.M. Quantitative review of occupancy detection technologies. Int. J. Radio Freq. 2015, 1, 1–19.
61. Pawar, Y.; Chopde, A.; Nandre, M. Motion detection using PIR sensor. Int. Res. J. Eng. Technol. 2018, 5, 2395-0056.
62. Hang, L.; Kim, D.H. Enhanced model-based predictive control system based on fuzzy logic for maintaining thermal comfort in IoT smart space. Appl. Sci. 2018, 8, 1031.
63. Cheng, C.C.; Lee, D. Enabling smart air conditioning by sensor development: A review. Sensors 2016, 16, 2028.
64. Peng, Y.T.; Lin, C.Y.; Sun, M.T.; Landis, C.A. Multimodality sensor system for long-term sleep quality monitoring. IEEE Trans. Biomed. Circuits Syst. 2007, 1, 217–227.
65. Choe, J.; Montserrat, D.M.; Schwichtenberg, A.J.; Delp, E.J. Sleep analysis using motion and head detection. In Proceedings of the 2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI), Las Vegas, NV, USA, 8–10 April 2018; pp. 29–32.
66. Mohammadi, S.M.; Enshaeifar, S.; Hilton, A.; Dijk, D.J.; Wells, K. Transfer learning for clinical sleep pose detection using a single 2D IR camera. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 29, 290–299.
67. Piriyajitakonkij, M.; Warin, P.; Lakhan, P.; Leelaarporn, P.; Kumchaiseemak, N.; Suwajanakorn, S.; Pianpanit, T.; Niparnan, N.; Mukhopadhyay, S.C.; Wilaiprasitporn, T. SleepPoseNet: Multi-view learning for sleep postural transition recognition using UWB. IEEE J. Biomed. Health Inform. 2020, 25, 1305–1314.
68. Cheng, X.; Hu, F.; Yang, B.; Wang, F.; Olofsson, T. Contactless sleep posture measurements for demand-controlled sleep thermal comfort: A pilot study. Indoor Air 2022, 32, e13175.
69. Wang, H.; Wang, G.; Li, X. Image-based occupancy positioning system using pose-estimation model for demand-oriented ventilation. J. Build. Eng. 2021, 39, 102220.
70. Cui, Z.; Sun, Y.; Gao, D.; Ji, J.; Zou, W. Computer-vision-assisted subzone-level demand-controlled ventilation with fast occupancy adaptation for large open spaces towards balanced IAQ and energy performance. Build. Environ. 2023, 207, 110427.
71. Zhai, Y.; Miao, F.; Yang, L.; Zhao, S.; Zhang, H.; Arens, E. Using personally controlled air movement to improve comfort after simulated summer commute. Build. Environ. 2019, 165, 106329.
72. Bourikas, L.; Costanza, E.; Gauthier, S.; James, P.A.B.; Kittley-Davies, J.; Ornaghi, C.; Rogers, A.; Saadatian, E.; Huang, Y. Camera-based window-opening estimation in a naturally ventilated office. Build. Res. Inf. 2018, 46, 148–163.
73. Zheng, H.; Li, F.; Cai, H.; Zhang, K. Non-intrusive measurement method for the window opening behavior. Energy Build. 2019, 197, 171–176.
74. Luong, D.; Richman, R.; Touchie, M. Towards window state detection using image processing in residential and office building facades. Build. Environ. 2022, 207, 108486.
75. Tien, P.W.; Wei, S.; Liu, T.; Calautit, J.; Darkwa, J.; Wood, C. A deep learning approach towards the detection and recognition of opening of windows for effective management of building ventilation heat losses and reducing space heating demand. Renew. Energy 2021, 177, 603–625.
76. Sun, C.; Guo, X.; Zhao, T.; Han, Y. Real-time detection method of window opening behavior using deep learning-based image recognition in severe cold regions. Energy Build. 2022, 268, 112196.
77. Chen, X.; Zou, Z.; Hao, F.; Wang, Y.; Mei, C.; Zhou, Y.; Wang, D.; Yang, X. Remote sensing of indoor thermal environment from outside the building through window opening gap by using infrared camera. Energy Build. 2023, 286, 112975.
78. Li, J.; Liu, N. The perception, optimization strategies and prospects of outdoor thermal comfort in China: A review. Build. Environ. 2020, 170, 106614.
79. De Montigny, L.; Ling, R.; Zacharias, J. The effects of weather on walking rates in nine cities. Environ. Behav. 2012, 44, 821–840.
80. Middel, A.; Krayenhoff, E.S. Micrometeorological determinants of pedestrian thermal exposure during record-breaking heat in Tempe, Arizona: Introducing the MaRTy observational platform. Sci. Total Environ. 2019, 687, 137–151.
81. Yoon, H.Y.; Kim, J.H.; Jeong, J.W. Classification of the sidewalk condition using self-supervised transfer learning for wheelchair safety driving. Sensors 2022, 22, 380.
82. Peng, Z.; Bardhan, R.; Ellard, C.; Steemers, K. Urban climate walk: A stop-and-go assessment of the dynamic thermal sensation and perception in two waterfront districts in Rome, Italy. Build. Environ. 2022, 221, 109267.
83. Liu, W.; Zhang, Y.; Deng, Q. The effects of urban microclimate on outdoor thermal sensation and neutral temperature in hot-summer and cold-winter climate. Energy Build. 2016, 128, 190–197.
84. Yao, J.; Yang, F.; Zhuang, Z.; Shao, Y.; Yuan, P.F. The effect of personal and microclimatic variables on outdoor thermal comfort: A field study in a cold season in Lujiazui CBD, Shanghai. Sustain. Cities Soc. 2018, 39, 181–188.
85. Speak, A.F.; Salbitano, F. Summer thermal comfort of pedestrians in diverse urban settings: A mobile study. Build. Environ. 2022, 208, 108600.
86. Cui, Y.; Yan, D.; Hong, T.; Ma, J. Temporal and spatial characteristics of the urban heat island in Beijing and the impact on building design and energy performance. Energy 2017, 130, 286–297.
87. Van Hove, L.W.A.; Jacobs, C.M.J.; Heusinkveld, B.G.; Elbers, J.A.; Van Driel, B.L.; Holtslag, A.A.M. Temporal and spatial variability of urban heat island and thermal comfort within the Rotterdam agglomeration. Build. Environ. 2015, 83, 91–103.
88. Chen, Y.C.; Yao, C.K.; Honjo, T.; Lin, T.P. The application of a high-density street-level air temperature observation network (HiSAN): Dynamic variation characteristics of urban heat island in Tainan, Taiwan. Sci. Total Environ. 2018, 626, 555–566.
89. Oke, T.R.; Mills, G.; Christen, A.; Voogt, J.A. Urban Climates; Cambridge University Press: Cambridge, UK, 2017.
90. Pigliautile, I.; Pisello, A.L. A new wearable monitoring system for investigating pedestrians’ environmental conditions: Development of the experimental tool and start-up findings. Sci. Total Environ. 2018, 630, 690–706.
91. Cureau, R.J.; Pigliautile, I.; Pisello, A.L. A new wearable system for sensing outdoor environmental conditions for monitoring hyper-microclimate. Sensors 2022, 22, 502.
92. Pigliautile, I.; Pisello, A.L. Environmental data clustering analysis through wearable sensing techniques: New bottom-up process aimed to identify intra-urban granular morphologies from pedestrian transects. Build. Environ. 2020, 171, 106641.
93. Tsin, P.K.; Knudby, A.; Krayenhoff, E.S.; Ho, H.C.; Brauer, M.; Henderson, S.B. Microscale mobile monitoring of urban air temperature. Urban Clim. 2016, 18, 58–72.
94. Nakayoshi, M.; Kanda, M.; Shi, R.; de Dear, R. Outdoor thermal physiology along human pathways: A study using a wearable measurement system. Int. J. Biometeorol. 2015, 59, 503–515.
95. Dam, N.; Ricketts, A.; Catlett, B.; Henriques, J. Wearable sensors for analyzing personal exposure to air pollution. In Proceedings of the 2017 Systems and Information Engineering Design Symposium (SIEDS), Charlottesville, VA, USA, 28 April 2017; pp. 1–4.
96. Saoutieff, E.; Polichetti, T.; Jouanet, L.; Faucon, A.; Vidal, A.; Pereira, A.; Boisseau, S.; Ernst, T.; Miglietta, M.L.; Alfano, B.; et al. A wearable low-power sensing platform for environmental and health monitoring: The convergence project. Sensors 2021, 21, 1802.
97. Deng, Y.; Chen, C.; Xian, X.; Tsow, F.; Verma, G.; McConnell, R.; Fruin, S.; Tao, N.; Forzani, E.S. A novel wireless wearable volatile organic compound (VOC) monitoring device with disposable sensors. Sensors 2016, 16, 2060.
98. Gallinelli, P.; Camponovo, R.; Guillot, V. CityFeel-micro climate monitoring for climate mitigation and urban design. Energy Procedia 2017, 122, 391–396.
99. Kulkarni, K.K.; Schneider, F.A.; Gowda, T.; Jayasuriya, S.; Middel, A. MaRTiny-A low-cost biometeorological sensing device with embedded computer vision for urban climate research. Front. Environ. Sci. 2022, 10, 550.
100. Yang, J.; Wong, M.S.; Ho, H.C.; Krayenhoff, E.S.; Chan, P.W.; Abbas, S.; Menenti, M. A semi-empirical method for estimating complete surface temperature from radiometric surface temperature, a study in Hong Kong city. Remote Sens. Environ. 2020, 237, 111540.
101. Zhao, Z.; Sharifi, A.; Dong, X.; Shen, L.; He, B.J. Spatial variability and temporal heterogeneity of surface urban heat island patterns and the suitability of local climate zones for land surface temperature characterization. Remote Sens. 2021, 13, 4338.
102. Voogt, J.A.; Oke, T.R. Thermal remote sensing of urban climates. Remote Sens. Environ. 2003, 86, 370–384.
103. Sun, R.; Chen, L. How can urban water bodies be designed for climate adaptation? Landsc. Urban Plan. 2012, 105, 27–33.
104. Da Silva Espinoza, N.; dos Santos, C.A.C.; de Oliveira, M.B.L.; Silva, M.T.; Santos, C.A.G.; da Silva, R.M.; Mishra, M.; Ferreira, R.R. Assessment of urban heat islands and thermal discomfort in the Amazonia biome in Brazil: A case study of Manaus city. Build. Environ. 2023, 227, 109772.
105. Pearsall, H. Staying cool in the compact city: Vacant land and urban heating in Philadelphia, Pennsylvania. Appl. Geogr. 2017, 79, 84–92.
106. Wang, C.; Li, Y.; Myint, S.W.; Zhao, Q.; Wentz, E.A. Impacts of spatial clustering of urban land cover on land surface temperature across Köppen climate zones in the contiguous United States. Landsc. Urban Plan. 2019, 192, 103668.
107. Stathopoulou, M.I.; Cartalis, C.; Keramitsoglou, I.; Santamouris, M. Thermal remote sensing of Thom’s discomfort index (DI): Comparison with in-situ measurements. In Proceedings of the Remote Sensing for Environmental Monitoring, GIS Applications, and Geology V, Bruges, Belgium, 29 October 2005; SPIE: Bellingham, WA, USA, 2005; pp. 131–139.
108. Xu, H.; Hu, X.; Guan, H.; He, G. Development of a fine-scale discomfort index map and its application in measuring living environments using remotely-sensed thermal infrared imagery. Energy Build. 2017, 150, 598–607.
109. Mijani, N.; Alavipanah, S.K.; Firozjaei, M.K.; Arsanjani, J.J.; Hamzeh, S.; Weng, Q. Modeling outdoor thermal comfort using satellite imagery: A principal component analysis-based approach. Ecol. Indic. 2020, 117, 106555.
110. Li, X.; Ratti, C. Mapping the spatio-temporal distribution of solar radiation within street canyons of Boston using Google Street View panoramas and building height model. Landsc. Urban Plan. 2019, 191, 103387.
111. Fabbri, K.; Costanzo, V. Drone-assisted infrared thermography for calibration of outdoor microclimate simulation models. Sustain. Cities Soc. 2020, 52, 101855.
112. Asawa, T.; Oshio, H.; Tanaka, K. Portable recording system for spherical thermography and its application to longwave mean radiant temperature estimation. Build. Environ. 2022, 222, 109412.
113. Gil, E.; Lerma, C.; Vercher, J.; Mas, Á. Methodology for thermal behaviour assessment of homogeneous façades in heritage buildings. J. Sens. 2017, 2017, 3280691.
114. Lee, S.; Moon, H.; Choi, Y.; Yoon, D.K. Analyzing thermal characteristics of urban streets using a thermal imaging camera: A case study on commercial streets in Seoul, Korea. Sustainability 2018, 10, 519.
115. Zhao, X.; Luo, Y.; He, J. Analysis of the thermal environment in pedestrian space using 3D thermal imaging. Energies 2020, 13, 3674.
116. Martin, M.; Chong, A.; Biljecki, F.; Miller, C. Infrared thermography in the built environment: A multi-scale review. Renew. Sust. Energ. Rev. 2022, 165, 112540.
117. Yu, K.; Chen, Y.; Wang, D.; Chen, Z.; Gong, A.; Li, J. Study of the seasonal effect of building shadows on urban land surface temperatures based on remote sensing data. Remote Sens. 2019, 11, 497.
118. Sun, Y.; Gao, C.; Li, J.; Gao, M.; Ma, R. Assessing the cooling efficiency of urban parks using data envelopment analysis and remote sensing data. Theor. Appl. Climatol. 2021, 145, 903–916.
119. Lee, S.; Moon, H.; Choi, Y.; Yoon, D.K. Urban morphology detection and computation for urban climate research. Landsc. Urban Plan. 2017, 167, 212–224.
120. Vanhoey, K.; de Oliveira, C.E.P.; Riemenschneider, H.; Bódis-Szomorú, A.; Manén, S.; Paudel, D.P.; Gygli, M.; Kobyshev, N.; Kroeger, T.; Dai, D.; et al. VarCity-the video: The struggles and triumphs of leveraging fundamental research results in a graphics video production. In Proceedings of the ACM Special Interest Group on Computer Graphics and Interactive Techniques Conference, Los Angeles, CA, USA, 30 July–3 August 2017; pp. 1–2.
121. Xian, G.; Shi, H.; Auch, R.; Gallo, K.; Zhou, Q.; Wu, Z.; Kolian, M. The effects of urban land cover dynamics on urban heat island intensity and temporal trends. GiSci. Remote Sens. 2021, 58, 501–515.
122. Wang, B.; Zhao, W.; Gao, P.; Zhang, Y.; Wang, Z. Crack damage detection method via multiple visual features and efficient multi-task learning model. Sensors 2018, 18, 1796.
123. Wang, L.; Xu, X.; Dong, H.; Gui, R.; Pu, F. Multi-pixel simultaneous classification of PolSAR image using convolutional neural networks. Sensors 2018, 18, 769.
124. Wurm, M.; Stark, T.; Zhu, X.X.; Weigand, M.; Taubenböck, H. Semantic segmentation of slums in satellite images using transfer learning on fully convolutional neural networks. ISPRS J. Photogramm. 2019, 150, 59–69.
125. Amirkolaee, H.A.; Arefi, H. Height estimation from single aerial images using a deep convolutional encoder-decoder network. ISPRS J. Photogramm. 2019, 149, 50–66.
126. Smart, N.; Eisenman, T.S.; Karvonen, A. Street tree density and distribution: An international analysis of five capital cities. Front. Ecol. Evol. 2020, 8, 562646.
127. Huang, C.; Yang, J.; Clinton, N.; Yu, L.; Huang, H.; Dronova, I.; Jin, J. Mapping the maximum extents of urban green spaces in 1039 cities using dense satellite images. Environ. Res. Lett. 2021, 16, 064072.
128. Huang, Y.; Lin, T.; Zhang, G.; Zhu, Y.; Zeng, Z.; Ye, H. Spatial patterns of urban green space and its actual utilization status in China based on big data analysis. Big Earth Data 2021, 5, 391–409.
129. Hong, X.; Sheridan, S.; Li, D. Mapping built environments from UAV imagery: A tutorial on mixed methods of deep learning and GIS. Comput. Urban Sci. 2022, 2, 12.
130. Hu, T.; Wei, D.; Su, Y.; Wang, X.; Zhang, J.; Sun, X.; Liu, Y.; Guo, Q. Quantifying the shape of urban street trees and evaluating its influence on their aesthetic functions based on mobile lidar data. ISPRS J. Photogramm. 2022, 184, 203–214.
131. Ren, C.; Cai, M.; Li, X.; Shi, Y.; See, L. Developing a rapid method for 3-dimensional urban morphology extraction using open-source data. Sustain. Cities Soc. 2020, 53, 101962.
132. Li, X.; Wang, G. Examining runner’s outdoor heat exposure using urban microclimate modeling and GPS trajectory mining. Comput. Environ. Urban Syst. 2021, 89, 101678.
133. Fox, J.; Osmond, P.; Peters, A. The effect of building facades on outdoor microclimate—Reflectance recovery from terrestrial multispectral images using a robust empirical line method. Climate 2018, 6, 56.
134. Li, X.; Zhang, C.; Li, W.; Ricard, R.; Meng, Q.; Zhang, W. Assessing street-level urban greenery using Google Street View and a modified green view index. Urban For. Urban Green. 2015, 14, 675–685.
135. Biljecki, F.; Ito, K. Street view imagery in urban analytics and GIS: A review. Landsc. Urban Plan. 2021, 215, 104217.
136. Li, Y.; Peng, L.; Wu, C.; Zhang, J. Street view imagery (SVI) in the built environment: A theoretical and systematic review. Buildings 2022, 12, 1167.
137. Gong, Z.; Ma, Q.; Kan, C.; Qi, Q. Classifying street spaces with street view images for a spatial indicator of urban functions. Sustainability 2019, 11, 6424.
138. Jamei, E.; Rajagopalan, P.; Seyedmahmoudian, M.; Jamei, Y. Review on the impact of urban geometry and pedestrian level greening on outdoor thermal comfort. Renew. Sust. Energ. Rev. 2016, 54, 1002–1017.
139. Klemm, W.; Heusinkveld, B.G.; Lenzholzer, S.; van Hove, B. Street greenery and its physical and psychological impact on thermal comfort. Landsc. Urban Plan. 2015, 138, 87–98.
140. Yang, J.; Shi, B.; Xia, G.; Xue, Q.; Cao, S.J. Impacts of urban form on thermal environment near the surface region at pedestrian height: A case study based on high-density built-up areas of Nanjing City in China. Sustainability 2020, 12, 1737.
141. Kim, Y.J.; Brown, R.D. A multilevel approach for assessing the effects of microclimatic urban design on pedestrian thermal comfort: The High Line in New York. Build. Environ. 2021, 205, 108244.
142. Kim, S.W.; Brown, R.D. Pedestrians’ behavior based on outdoor thermal comfort and micro-scale thermal environments, Austin, TX. Sci. Total Environ. 2022, 808, 152143.
143. Abdelhafez, M.H.H.; Altaf, F.; Alshenaifi, M.; Hamdy, O.; Ragab, A. Achieving effective thermal performance of street canyons in various climatic zones. Sustainability 2022, 14, 10780.
144. Oke, T.R. Canyon geometry and the nocturnal urban heat island: Comparison of scale model and field observations. J. Climatol. 1981, 1, 237–254.
145. Lin, T.P.; Tsai, K.T.; Hwang, R.L.; Matzarakis, A. Quantification of the effect of thermal indices and sky view factor on park attendance. Landsc. Urban Plan. 2012, 107, 137–146.
146. Oke, T.R. Street design and urban canopy layer climate. Energy Build. 1988, 11, 103–113.
147. Scarano, M.; Mancini, F. Assessing the relationship between sky view factor and land surface temperature to the spatial resolution. Int. J. Remote Sens. 2017, 38, 6910–6929.
148. Watson, I.D.; Johnson, G.T. Graphical estimation of sky view-factors in urban environments. J. Climatol. 1987, 7, 193–197.
149. Chapman, L.; Thornes, J.E.; Bradley, A.V. Sky-view factor approximation using GPS receivers. Int. J. Climatol. 2002, 22, 615–621.
150. Brown, M.J.; Grimmond, S.; Ratti, C. Comparison of Methodologies for Computing Sky View Factor in Urban Environments; Los Alamos National Laboratory: Los Alamos, NM, USA, 2001.
151. Miao, C.; Yu, S.; Hu, Y.; Zhang, H.; He, X.; Chen, W. Review of methods used to estimate the sky view factor in urban street canyons. Build. Environ. 2020, 168, 106497.
152. Holmer, B. A simple operative method for determination of sky view factors in complex urban canyons from fisheye photographs. Meteorol. Z. 1992, 1, 236–239.
153. Steyn, D.G. The calculation of view factors from fisheye-lens photographs: Research note. Atmos. Ocean 1980, 18, 254–258.
154. Chen, L.; Ng, E.; An, X.; Ren, C.; Lee, M.; Wang, U.; He, Z. Sky view factor analysis of street canyons and its implications for daytime intra-urban air temperature differentials in high-rise, high-density urban areas of Hong Kong: A GIS-based simulation approach. Int. J. Climatol. 2012, 32, 121–136.
155. Gong, F.Y.; Zeng, Z.C.; Zhang, F.; Li, X.; Ng, E.; Norford, L.K. Mapping sky, tree, and building view factors of street canyons in a high-density urban environment. Build. Environ. 2018, 134, 155–167.
156. Zeng, L.; Lu, J.; Li, W.; Li, Y. A fast approach for large-scale Sky View Factor estimation using street view images. Build. Environ. 2018, 135, 74–84.
157. Middel, A.; Lukasczyk, J.; Maciejewski, R. Sky view factors from synthetic fisheye photos for thermal comfort routing–A case study in Phoenix, Arizona. Urban Plan. 2017, 2, 19–30.
158. Carrasco-Hernandez, R.; Smedley, A.R.D.; Webb, A.R. Using urban canyon geometries obtained from Google Street View for atmospheric studies: Potential applications in the calculation of street level total shortwave irradiances. Energy Build. 2015, 86, 340–348.
159. Xia, Y.; Yabuki, N.; Fukuda, T. Sky view factor estimation from street view images based on semantic segmentation. Urban Clim. 2021, 40, 100999.
160. Liang, J.; Gong, J.; Sun, J.; Zhou, J.; Li, W.; Li, Y.; Liu, J.; Shen, S. Automatic sky view factor estimation from street view photographs—A big data approach. Remote Sens. 2017, 9, 411.
161. Gong, F.Y.; Zeng, Z.C.; Ng, E.; Norford, L.K. Spatiotemporal patterns of street-level solar radiation estimated using Google Street View in a high-density urban environment. Build. Environ. 2019, 148, 547–566.
162. Du, K.; Ning, J.; Yan, L. How long is the sun duration in a street canyon?—Analysis of the view factors of street canyons. Build. Environ. 2020, 172, 106680.
163. Nice, K.A.; Wijnands, J.S.; Middel, A.; Wang, J.; Qiu, Y.; Zhao, N.; Thompson, J.; Aschwanden, G.D.P.A.; Zhao, H.; Stevenson, M. Sky pixel detection in outdoor imagery using an adaptive algorithm and machine learning. Urban Clim. 2020, 31, 100572.
164. Urban, J.; Pikl, M.; Zemek, F.; Novotný, J. Using Google Street View photographs to assess long-term outdoor thermal perception and thermal comfort in the urban environment during heatwaves. Front. Environ. Sci. 2022, 10, 878341.
165. Doersch, C.; Singh, S.; Gupta, A.; Sivic, J.; Efros, A. What makes Paris look like Paris? ACM Trans. Graph. 2012, 31, 101.
166. Kang, J.; Körner, M.; Wang, Y.; Taubenböck, H.; Zhu, X.X. Building instance classification using street view images. ISPRS J. Photogramm. 2018, 145, 44–59.
167. Deng, Z.; Chen, Y.; Pan, X.; Peng, Z.; Yang, J. Integrating GIS-based point of interest and community boundary datasets for urban building energy modeling. Energies 2021, 14, 1049.
168. Koch, D.; Despotovic, M.; Sakeena, M.; Döller, M.; Zeppelzauer, M. Visual estimation of building condition with patch-level ConvNets. In Proceedings of the 2018 ACM Workshop on Multimedia for Real Estate Tech, Yokohama, Japan, 11 June 2018; pp. 12–17.
169. Zeppelzauer, M.; Despotovic, M.; Sakeena, M.; Koch, D.; Döller, M. Automatic prediction of building age from photographs. In Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval, Yokohama, Japan, 11–14 June 2018; pp. 126–134.
170. Li, Y.; Chen, Y.; Rajabifard, A.; Khoshelham, K.; Aleksandrov, M. Estimating building age from Google street view images using deep learning (short paper). In Proceedings of the 10th International Conference on Geographic Information Science (GIScience 2018), Melbourne, Australia, 28–31 August 2018; pp. 1–7.
171. Kim, H.; Han, S. Interactive 3D building modeling method using panoramic image sequences and digital map. Multimed. Tools Appl. 2018, 77, 27387–27404.
172. Kraff, N.J.; Wurm, M.; Taubenböck, H. The dynamics of poor urban areas-analyzing morphologic transformations across the globe using Earth observation data. Cities 2020, 107, 102905.
173. Zhong, T.; Ye, C.; Wang, Z.; Tang, G.; Zhang, W.; Ye, Y. City-scale mapping of urban façade color using street-view imagery. Remote Sens. 2021, 13, 1591.
174. Zhang, J.; Fukuda, T.; Yabuki, N. Development of a city-scale approach for façade color measurement with building functional classification using deep learning and street view images. ISPRS Int. J. Geo-Inf. 2021, 10, 551.
175. Rosenfelder, M.; Wussow, M.; Gust, G.; Cremades, R.; Neumann, D. Predicting residential electricity consumption using aerial and street view images. Appl. Energy 2021, 301, 117407.
  176. Li, X.; Zhang, C.; Li, W. Building block level urban land-use information retrieval based on Google Street View images. GiSci. Remote Sens. 2017, 54, 819–835. [Google Scholar] [CrossRef]
177. Cao, R.; Zhu, J.; Tu, W.; Li, Q.; Cao, J.; Liu, B.; Zhang, Q.; Qiu, G. Integrating aerial and street view images for urban land use classification. Remote Sens. 2018, 10, 1553. [Google Scholar] [CrossRef]
  178. Yu, Y.; Fang, F.; Liu, Y.; Li, S.; Luo, Z. Urban land use classification using street view images based on deep transfer network. In Urban Intelligence and Applications: Proceedings of ICUIA 2019; Springer International Publishing: Cham, Switzerland, 2020; pp. 83–95. [Google Scholar]
179. Hu, F.; Liu, W.; Lu, J.; Song, C.; Meng, Y.; Wang, J.; Xing, H. Urban function as a new perspective for adaptive street quality assessment. Sustainability 2020, 12, 1296. [Google Scholar] [CrossRef]
  180. Ye, C.; Zhang, F.; Mu, L.; Gao, Y.; Liu, Y. Urban function recognition by integrating social media and street-level imagery. Environ. Plann. B Urban Anal. City Sci. 2021, 48, 1430–1444. [Google Scholar] [CrossRef]
  181. Ning, H.; Ye, X.; Chen, Z.; Liu, T.; Cao, T. Sidewalk extraction using aerial and street view images. Environ. Plann. B Urban Anal. City Sci. 2022, 49, 7–22. [Google Scholar] [CrossRef]
  182. Li, M.; Sheng, H.; Irvin, J.; Chung, H.; Ying, A.; Sun, T.; Ng, A.Y.; Rodriguez, D.A. Marked crosswalks in US transit-oriented station areas, 2007–2020: A computer vision approach using street view imagery. Environ. Plann. B Urban Anal. City Sci. 2023, 50, 350–369. [Google Scholar] [CrossRef]
  183. Li, X.; Ning, H.; Huang, X.; Dadashova, B.; Kang, Y.; Ma, A. Urban infrastructure audit: An effective protocol to digitize signalized intersections by mining street view images. Cartogr. Geogr. Inf. Sci. 2022, 49, 32–49. [Google Scholar] [CrossRef]
  184. Ibrahim, M.R.; Haworth, J.; Cheng, T. Understanding cities with machine eyes: A review of deep computer vision in urban analytics. Cities 2020, 96, 102481. [Google Scholar] [CrossRef]
  185. Aram, F.; Solgi, E.; Garcia, E.H.; Mosavi, A. Urban heat resilience at the time of global warming: Evaluating the impact of the urban parks on outdoor thermal comfort. Environ. Sci. Eur. 2020, 32, 17. [Google Scholar] [CrossRef]
186. Zhou, H.; Tao, G.; Yan, X.; Sun, J. Influences of greening and structures on urban thermal environments: A case study in Xuzhou City, China. Urban For. Urban Green. 2021, 66, 127386. [Google Scholar] [CrossRef]
  187. Wang, R.; Yang, B.; Yao, Y.; Bloom, M.S.; Feng, Z.; Yuan, Y.; Zhang, J.; Liu, P.; Wu, W.; Lu, Y.; et al. Residential greenness, air pollution and psychological well-being among urban residents in Guangzhou, China. Sci. Total Environ. 2020, 711, 134843. [Google Scholar] [CrossRef]
188. Suppakittpaisarn, P.; Jiang, B.; Slavenas, M.; Sullivan, W.C. Does density of green infrastructure predict preference? Urban For. Urban Green. 2019, 40, 236–244. [Google Scholar] [CrossRef]
  189. Gupta, K.; Kumar, P.; Pathan, S.K.; Sharma, K.P. Urban Neighborhood Green Index–A measure of green spaces in urban areas. Landsc. Urban Plan. 2012, 105, 325–335. [Google Scholar] [CrossRef]
  190. Shah, A.; Garg, A.; Mishra, V. Quantifying the local cooling effects of urban green spaces: Evidence from Bengaluru, India. Landsc. Urban Plan. 2021, 209, 104043. [Google Scholar] [CrossRef]
191. Yu, Z.; Yang, G.; Zuo, S.; Jørgensen, G.; Koga, M.; Vejre, H. Critical review on the cooling effect of urban blue-green space: A threshold-size perspective. Urban For. Urban Green. 2020, 49, 126630. [Google Scholar] [CrossRef]
192. Ye, Y.; Xie, H.; Fang, J.; Jiang, H.; Wang, D. Daily accessed street greenery and housing price: Measuring economic performance of human-scale streetscapes via new urban data. Sustainability 2019, 11, 1741. [Google Scholar] [CrossRef]
  193. Yang, J.; Rong, H.; Kang, Y.; Zhang, F.; Chegut, A. The financial impact of street-level greenery on New York commercial buildings. Landsc. Urban Plan. 2021, 214, 104162. [Google Scholar] [CrossRef]
  194. Jing, F.; Liu, L.; Zhou, S.; Song, J.; Wang, L.; Zhou, H.; Wang, Y.; Ma, R. Assessing the impact of street-view greenery on fear of neighborhood crime in Guangzhou, China. Int. J. Environ. Res. Public Health 2021, 18, 311. [Google Scholar] [CrossRef]
  195. Xiao, Y.; Zhang, Y.; Sun, Y.; Tao, P.; Kuang, X. Does green space really matter for residents’ obesity? A new perspective from Baidu Street View. Front. Public Health 2020, 8, 332. [Google Scholar] [CrossRef]
196. He, H.; Lin, X.; Yang, Y.; Lu, Y. Association of street greenery and physical activity in older adults: A novel study using pedestrian-centered photographs. Urban For. Urban Green. 2020, 55, 126789. [Google Scholar] [CrossRef]
  197. Yang, J.; Zhao, L.; Mcbride, J.; Gong, P. Can you see green? Assessing the visibility of urban forests in cities. Landsc. Urban Plan. 2009, 91, 97–104. [Google Scholar] [CrossRef]
  198. Riihimäki, H.; Luoto, M.; Heiskanen, J. Estimating fractional cover of tundra vegetation at multiple scales using unmanned aerial systems and optical satellite data. Remote Sens. Environ. 2019, 224, 119–132. [Google Scholar] [CrossRef]
199. Barbierato, E.; Bernetti, I.; Capecchi, I.; Saragosa, C. Integrating remote sensing and street view images to quantify urban forest ecosystem services. Remote Sens. 2020, 12, 329. [Google Scholar] [CrossRef]
200. Lumnitz, S.; Devisscher, T.; Mayaud, J.R.; Radic, V.; Coops, N.C.; Griess, V.C. Mapping trees along urban street networks with deep learning and street-level imagery. ISPRS J. Photogramm. Remote Sens. 2021, 175, 144–157. [Google Scholar] [CrossRef]
  201. Ki, D.; Lee, S. Analyzing the effects of Green View Index of neighborhood streets on walking time using Google Street View and deep learning. Landsc. Urban Plan. 2021, 205, 103920. [Google Scholar] [CrossRef]
  202. Yu, H.; Zhou, Y.; Wang, R.; Qian, Z.; Knibbs, L.D.; Jalaludin, B.; Schootman, M.; McMillin, S.E.; Howard, S.W.; Lin, L.Z.; et al. Associations between trees and grass presence with childhood asthma prevalence using deep learning image segmentation and a novel green view index. Environ. Pollut. 2021, 286, 117582. [Google Scholar] [CrossRef]
  203. Wang, B.; Li, L.; Nakashima, Y.; Kawasaki, R.; Nagahara, H.; Yagi, Y. Noisy-LSTM: Improving temporal awareness for video semantic segmentation. IEEE Access 2021, 9, 46810–46820. [Google Scholar] [CrossRef]
204. Dong, R.; Zhang, Y.; Zhao, J. How green are the streets within the sixth ring road of Beijing? An analysis based on Tencent street view pictures and the green view index. Int. J. Environ. Res. Public Health 2018, 15, 1367. [Google Scholar] [CrossRef]
  205. Kumakoshi, Y.; Chan, S.Y.; Koizumi, H.; Li, X.; Yoshimura, Y. Standardized green view index and quantification of different metrics of urban green vegetation. Sustainability 2020, 12, 7434. [Google Scholar] [CrossRef]
  206. Chiang, Y.C.; Liu, H.H.; Li, D.; Ho, L.C. Quantification through deep learning of sky view factor and greenery on urban streets during hot and cool seasons. Landsc. Urban Plan. 2023, 232, 104679. [Google Scholar] [CrossRef]
  207. Zhang, J.; Hu, A. Analyzing green view index and green view index best path using Google street view and deep learning. J. Comput. Des. Eng. 2022, 9, 2010–2023. [Google Scholar] [CrossRef]
  208. Tong, M.; She, J.; Tan, J.; Li, M.; Ge, R.; Gao, Y. Evaluating street greenery by multiple indicators using street-level imagery and satellite images: A Case Study In Nanjing, China. Forests 2020, 11, 1347. [Google Scholar] [CrossRef]
209. Branson, S.; Wegner, J.D.; Hall, D.; Lang, N.; Schindler, K.; Perona, P. From Google Maps to a fine-grained catalog of street trees. ISPRS J. Photogramm. Remote Sens. 2018, 135, 13–30. [Google Scholar] [CrossRef]
  210. Choi, K.; Lim, W.; Chang, B.; Jeong, J.; Kim, I.; Park, C.R.; Ko, D.W. An automatic approach for tree species detection and profile estimation of urban street trees using deep learning and Google street view images. ISPRS J. Photogramm. Remote Sens. 2022, 190, 165–180. [Google Scholar] [CrossRef]
  211. Seiferling, I.; Naik, N.; Ratti, C.; Proulx, R. Green streets—Quantifying and mapping urban trees with street-level imagery and computer vision. Landsc. Urban Plan. 2017, 165, 93–101. [Google Scholar] [CrossRef]
  212. Liu, D.; Jiang, Y.; Wang, R.; Lu, Y. Establishing a citywide street tree inventory with street view images and computer vision techniques. Comput. Environ. Urban Syst. 2023, 100, 101924. [Google Scholar] [CrossRef]
  213. Yue, N.; Zhang, Z.; Jiang, S.; Chen, S. Deep feature migration for real-time mapping of urban street shading coverage index based on street-level panorama images. Remote Sens. 2022, 14, 1796. [Google Scholar] [CrossRef]
  214. Wong, P.K.Y.; Luo, H.; Wang, M.; Cheng, J.C.P. Enriched and discriminative convolutional neural network features for pedestrian re-identification and trajectory modeling. Comput. Aided Civ. Infrastruct. Eng. 2022, 37, 573–592. [Google Scholar] [CrossRef]
  215. Tokuda, E.K.; Lockerman, Y.; Ferreira, G.B.A.; Sorrelgreen, E.; Boyle, D.; Cesar, R.M.; Silva, C.T. A new approach for pedestrian density estimation using moving sensors and computer vision. ACM Trans. Spat. Algorithms Syst. 2020, 6, 26. [Google Scholar] [CrossRef]
  216. Li, Z.; Ma, J. Discussing street tree planning based on pedestrian volume using machine learning and computer vision. Build. Environ. 2022, 219, 109178. [Google Scholar] [CrossRef]
  217. Martani, C.; Stent, S.; Acikgoz, S.; Soga, K.; Bain, D.; Jin, Y. Pedestrian monitoring techniques for crowd-flow prediction. Proc. Inst. Civ. Eng. Smart Infrastruct. Constr. 2017, 170, 17–27. [Google Scholar] [CrossRef]
  218. Malinovskiy, Y.; Zheng, J.; Wang, Y. Model-free video detection and tracking of pedestrians and bicyclists. Comput. Aided Civ. Infrastruct. Eng. 2009, 24, 157–168. [Google Scholar] [CrossRef]
219. Batty, M. Urban analytics defined. Environ. Plan. B Urban Anal. City Sci. 2019, 46, 403–405. [Google Scholar] [CrossRef]
  220. Ashraf, S. A proactive role of IoT devices in building smart cities. Internet Things Cyber Phys. Syst. 2021, 1, 8–13. [Google Scholar] [CrossRef]
  221. Feng, H.; Chen, D.; Lv, H. Sensible and secure IoT communication for digital twins, cyber twins, web twins. Internet Things Cyber Phys. Syst. 2021, 1, 34–44. [Google Scholar] [CrossRef]
  222. Cheng, C.; Dou, J.; Zheng, Z. Energy-efficient SDN for Internet of Things in smart city. Internet Things Cyber Phys. Syst. 2022, 2, 145–158. [Google Scholar] [CrossRef]
  223. Paneru, S.; Jeelani, I. Computer vision applications in construction: Current state, opportunities & challenges. Autom. Constr. 2021, 132, 103940. [Google Scholar]
  224. Tavares, P.; Costa, C.M.; Rocha, L.; Malaca, P.; Costa, P.; Moreira, A.P.; Sousa, A.; Veiga, G. Collaborative welding system using BIM for robotic reprogramming and spatial augmented reality. Autom. Constr. 2019, 106, 102825. [Google Scholar] [CrossRef]
225. Moon, S.; Becerik-Gerber, B.; Soibelman, L. Virtual learning for workers in robot deployed construction sites. In Advances in Informatics and Computing in Civil and Construction Engineering; Proceedings of the 35th CIB W78 2018 Conference: IT in Design, Construction, and Management, Chicago, IL, USA, 1–3 October 2018; Mutis, I., Hartmann, T., Eds.; Springer: Cham, Switzerland, 2019; pp. 889–895. [Google Scholar]
  226. Chu, B.; Jung, K.; Lim, M.T.; Hong, D. Robot-based construction automation: An application to steel beam assembly (Part I). Autom. Constr. 2013, 32, 46–61. [Google Scholar] [CrossRef]
227. Li, J.; Wang, Y.; Zhang, K.; Wang, Z.; Lu, J. Design and analysis of demolition robot arm based on finite element method. Adv. Mech. Eng. 2019, 11, 1–9. [Google Scholar] [CrossRef]
  228. Cui, J.; Liew, L.S.; Sabaliauskaite, G.; Zhou, F. A review on safety failures, security attacks, and available countermeasures for autonomous vehicles. Ad. Hoc. Netw. 2019, 90, 101823. [Google Scholar] [CrossRef]
  229. Silva Oliveira, A.S.; dos Reis, M.C.; da Mota, F.A.X.; Martinez, M.E.M.; Alexandria, A.R. New trends on computer vision applied to mobile robot localization. Internet Things Cyber Phys. Syst. 2022, 2, 63–69. [Google Scholar] [CrossRef]
  230. Li, H.; Luo, X.; Skitmore, M. Intelligent hoisting with car-like mobile robots. J. Constr. Eng. Manag. 2020, 146, 04020136. [Google Scholar] [CrossRef]
  231. Kim, D.; Lee, S.H.; Kamat Vineet, R. Proximity prediction of mobile objects to prevent contact-driven accidents in co-robotic construction. J. Comput. Civ. Eng. 2020, 34, 04020022. [Google Scholar] [CrossRef]
  232. Wang, Z.; Li, H.; Zhang, X. Construction waste recycling robot for nails and screws: Computer vision technology and neural network approach. Autom. Constr. 2019, 97, 220–228. [Google Scholar] [CrossRef]
  233. Luo, H.; Wang, M.; Wong, P.K.Y.; Cheng, J.C.P. Full body pose estimation of construction equipment using computer vision and deep learning techniques. Autom. Constr. 2020, 110, 103016. [Google Scholar] [CrossRef]
  234. Lee, M.F.R.; Chien, T.W. Intelligent robot for worker safety surveillance: Deep learning perception and visual navigation. In Proceedings of the 2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS), Taipei, China, 19–21 August 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–6. [Google Scholar]
235. Wild, W. Application of infrared thermography in civil engineering. Proc. Estonian Acad. Sci. Eng. 2007, 13, 436–444. [Google Scholar] [CrossRef]
  236. Antonopoulos, V.Z. Water movement and heat transfer simulations in a soil under ryegrass. Biosyst. Eng. 2006, 95, 127–138. [Google Scholar] [CrossRef]
237. Al-Karawi, J.; Schmidt, J. Application of infrared thermography to the analysis of welding processes. In Proceedings of the 7th International Conference on Quantitative InfraRed Thermography, Brussels, Belgium, 5–8 July 2004; Von Karman Institute: Sint-Genesius-Rode, Belgium, 2004; pp. 1–6. [Google Scholar]
238. Jadin, M.S.; Taib, S. Recent progress in diagnosing the reliability of electrical equipment by using infrared thermography. Infrared Phys. Technol. 2012, 55, 236–245. [Google Scholar] [CrossRef]
  239. Johnson, E.J.; Hyer, P.V.; Culotta, P.W.; Clark, I.O. Evaluation of infrared thermography as a diagnostic tool in CVD applications. J. Cryst. Growth 1998, 187, 463–473. [Google Scholar] [CrossRef]
240. Hittel, M.J.; Bingham, R.; Sanders, M.K. NFPA 70B recommended practice for electrical equipment maintenance 2002 edition. In Proceedings of the 38th IAS Annual Meeting on Conference Record of the Industry Applications Conference, Salt Lake City, UT, USA, 12–16 October 2003; IEEE: Piscataway, NJ, USA, 2003; pp. 1280–1284. [Google Scholar]
  241. Singh, G.; Kumar, T.C.A.; Naikan, V.N.A. Fault diagnosis of induction motor cooling system using infrared thermography. In Proceedings of the 2016 IEEE 6th International Conference on Power Systems (ICPS), New Delhi, India, 4–6 March 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–4. [Google Scholar]
  242. Jeffali, F.; El Kihel, B.; Nougaoui, A.; Delaunois, F. Monitoring and diagnostic misalignment of asynchronous machines by infrared thermography. J. Mater. Environ. Sci. 2015, 6, 1192–1199. [Google Scholar]
  243. Chaturvedi, D.K.; Iqbal, M.S.; Singh, M.P. Intelligent health monitoring system for three phase induction motor using infrared thermal image. In Proceedings of the 2015 International Conference on Energy Economics and Environment (ICEEE), Greater Noida, India, 27–28 March 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 1–6. [Google Scholar]
  244. Du, X.; Feng, L.; Yang, Y.; Yang, L. Experimental study on heat transfer enhancement of wavy finned flat tube with longitudinal vortex generators. Appl. Therm. Eng. 2013, 50, 55–62. [Google Scholar] [CrossRef]
  245. Sarraf, K.; Launay, S.; El Achkar, G.; Tadrist, L. Local vs. global heat transfer and flow analysis of hydrocarbon complete condensation in plate heat exchanger based on infrared thermography. Int. J. Heat Mass Transf. 2015, 90, 878–893. [Google Scholar] [CrossRef]
  246. Ge, Z.; Du, X.; Yang, L.; Yang, Y.; Li, Y.; Jin, Y. Performance monitoring of direct air-cooled power generating unit with infrared thermography. Appl. Therm. Eng. 2011, 31, 418–424. [Google Scholar] [CrossRef]
  247. Sarraf, K.; Launay, S.; Tadrist, L. Analysis of enhanced vapor desuperheating during condensation inside a plate heat exchanger. Int. J. Therm. Sci. 2016, 105, 96–108. [Google Scholar] [CrossRef]
  248. Kanargi, B.; Tan, J.M.S.; Lee, P.S.; Yap, C. A tapered inlet/outlet flow manifold for planar, air-cooled oblique-finned heat sink. Appl. Therm. Eng. 2020, 174, 115250. [Google Scholar] [CrossRef]
  249. Li, H.Y.; Chiang, M.H. Effects of shield on thermal-fluid performance of vapor chamber heat sink. Int. J. Heat Mass Transf. 2011, 54, 1410–1419. [Google Scholar] [CrossRef]
  250. Li, H.Y.; Chao, S.M.; Tsai, G.L. Thermal performance measurement of heat sinks with confined impinging jet by infrared thermography. Int. J. Heat Mass Transf. 2005, 48, 5386–5394. [Google Scholar] [CrossRef]
  251. Xu, C.; Yang, L.; Li, L.; Du, X. Experimental study on heat transfer performance improvement of wavy finned flat tube. Appl. Therm. Eng. 2015, 85, 80–88. [Google Scholar] [CrossRef]
  252. Li, H.Y.; Chiang, M.H.; Lee, C.I.; Yang, W.J. Thermal performance of plate-fin vapor chamber heat sinks. Int. Commun. Heat Mass Transf. 2010, 37, 731–738. [Google Scholar] [CrossRef]
  253. He, R.; Xu, P.; Chen, Z.; Luo, W.; Su, Z.; Mao, J. A non-intrusive approach for fault detection and diagnosis of water distribution systems based on image sensors, audio sensors and an inspection robot. Energy Build. 2021, 243, 110967. [Google Scholar] [CrossRef]
  254. Zhou, B.; Yang, H.; Feng, W.; Jiang, Y.; Chen, Y. Self-propelled and size distribution of condensate droplets on superhydrophobic surfaces. Surf. Technol. 2020, 49, 170–176, 190. (In Chinese) [Google Scholar]
  255. Wu, J.; Ouyang, G.; Hou, P.; Xiao, H. Experimental investigation of frost formation on a parallel flow evaporator. Appl. Energy 2011, 88, 1549–1556. [Google Scholar] [CrossRef]
  256. Malik, A.N.; Khan, S.A.; Lazoglu, I. A novel demand-actuated defrost approach based on the real-time thickness of frost for the energy conservation of a refrigerator. Int. J. Refrig. 2021, 131, 168–177. [Google Scholar] [CrossRef]
  257. Yoo, J.W.; Chung, Y.; Kim, G.T.; Song, C.W.; Yoon, P.H.; Sa, Y.C.; Kim, M.S. Determination of defrosting start time in an air-to-air heat pump system by frost volume calculation method. Int. J. Refrig. 2018, 96, 169–178. [Google Scholar] [CrossRef]
258. Zheng, X.; Shi, R.; You, S.; Han, Y.; Shi, K. Experimental study of defrosting control method based on image processing technology for air source heat pumps. Sustain. Cities Soc. 2019, 51, 101667. [Google Scholar] [CrossRef]
  259. Miao, H.; Yang, X.; Yin, D.; Zheng, W.; Zhang, H.; Zhang, S.; Liu, Z. A novel defrosting control strategy with image processing technique and fractal theory. Int. J. Refrig. 2022, 138, 259–269. [Google Scholar] [CrossRef]
  260. Li, Z.; Wang, W.; Sun, Y.; Wang, S.; Deng, S.; Lin, Y. Applying image recognition to frost built-up detection in air source heat pumps. Energy 2021, 233, 121004. [Google Scholar] [CrossRef]
  261. Wang, S.; Wang, W.; Sun, Y.; Li, Z. Research on image recognition frost measurement technology for air-source heat pumps based on light adaptation. Heat. Vent. Air Cond. 2022, 52, 68, 113–117. (In Chinese) [Google Scholar]
  262. Smith, S.T.; Hanby, V.I.; Harpham, C. A probabilistic analysis of the future potential of evaporative cooling systems in a temperate climate. Energy Build. 2011, 43, 507–516. [Google Scholar] [CrossRef]
263. Campaniço, H.; Soares, P.M.M.; Hollmuller, P.; Cardoso, R.M. Climatic cooling potential and building cooling demand savings: High resolution spatiotemporal analysis of direct ventilation and evaporative cooling for the Iberian Peninsula. Renew. Energy 2016, 85, 766–776. [Google Scholar] [CrossRef]
  264. Ahmad, A.; Rehman, S.; Al-Hadhrami, L.M. Performance evaluation of an indirect evaporative cooler under controlled environmental conditions. Energy Build. 2013, 62, 278–285. [Google Scholar] [CrossRef]
  265. Shi, W.; Min, Y.; Chen, Y.; Yang, H. Development of a three-dimensional numerical model of indirect evaporative cooler incorporating with air dehumidification. Int. J. Heat Mass Transf. 2022, 185, 122316. [Google Scholar] [CrossRef]
  266. Chen, Y.; Luo, Y.; Yang, H. A simplified analytical model for indirect evaporative cooling considering condensation from fresh air: Development and application. Energy Build. 2015, 108, 387–400. [Google Scholar] [CrossRef]
  267. Chen, Y.; Yang, H.; Luo, Y. Indirect evaporative cooler considering condensation from primary air: Model development and parameter analysis. Build. Environ. 2016, 95, 330–345. [Google Scholar] [CrossRef]
  268. Meng, D.; Lv, J.; Chen, Y.; Li, H.; Ma, X. Visualized experimental investigation on cross-flow indirect evaporative cooler with condensation. Appl. Therm. Eng. 2018, 145, 165–173. [Google Scholar] [CrossRef]
  269. Min, Y.; Chen, Y.; Yang, H.; Guo, C. Characteristics of primary air condensation in indirect evaporative cooler: Theoretical analysis and visualized validation. Build. Environ. 2020, 174, 106783. [Google Scholar] [CrossRef]
  270. You, Y.; Wang, G.; Yang, B.; Guo, C.; Ma, Y.; Cheng, B. Study on heat transfer characteristics of indirect evaporative cooling system based on secondary side hydrophilic. Energy Build. 2022, 257, 111704. [Google Scholar] [CrossRef]
  271. Min, Y.; Shi, W.; Shen, B.; Chen, Y.; Yang, H. Enhancing the cooling and dehumidification performance of indirect evaporative cooler by hydrophobic-coated primary air channels. Int. J. Heat Mass Transf. 2021, 179, 121733. [Google Scholar] [CrossRef]
  272. Damoulakis, G.; Gukeh, M.J.; Moitra, S.; Megaridis, C.M. Quantifying steam dropwise condensation heat transfer via experiment, computer vision and machine learning algorithms. In Proceedings of the 2021 20th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (iTherm), San Diego, CA, USA, 1–4 June 2021; pp. 1015–1023. [Google Scholar]
  273. Suh, Y.; Lee, J.; Simadiris, P.; Yan, X.; Sett, S.; Li, L.; Rabbi, K.F.; Miljkovic, N.; Won, Y. A deep learning perspective on dropwise condensation. Adv. Sci. 2021, 8, 2101794. [Google Scholar] [CrossRef] [PubMed]
  274. Khodakarami, S.; Fazle Rabbi, K.; Suh, Y.; Won, Y.; Miljkovic, N. Machine learning enabled condensation heat transfer measurement. Int. J. Heat Mass Transf. 2022, 194, 123016. [Google Scholar] [CrossRef]
Figure 1. The logical framework for the research.
Figure 3. Integrated system combining Eulerian video magnification (EVM) technology with the air supply end conditioning device [38].
Figure 4. Non-contact thermal comfort measurement in practice [40].
Figure 5. Hand images processed by the EVM algorithm [40].
Figure 6. Non-contact thermal comfort measurement in practice [54].
Figure 7. Human pose recognition based on a human skeleton keypoint model [58].
Figure 8. Novel vision-based non-contact detection of human thermal comfort during sleep.
Figure 9. Non-contact measurement in a multi-purpose lecture hall [69].
Figure 10. Non-contact automatic control process of the micro-environment air supply device.
Figure 11. Methodological framework for collecting different types of data to measure pedestrian thermal comfort.
Figure 12. Top view of the MaRTiny device: a Jetson Nano with cooling fan, camera, and WiFi module, and an Arduino board connected to different weather sensors [99].
Figure 13. Thermal image of land surface temperature (LST) in Shenyang, China, in 2020 [101].
Figure 14. Visible light remote sensing images.
Figure 15. Street view images.
Figure 16. Multiple condition monitoring applications of thermal infrared (TIR) imaging.
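The TIR condition monitoring applications above (cf. refs. [238,240,241,243]) commonly flag a component as faulty when its surface temperature rise over ambient exceeds a set threshold. The sketch below illustrates that rule only; the ΔT value and the synthetic radiometric frame are assumptions for illustration, not taken from the reviewed studies:

```python
def hotspot_pixels(temps, ambient, delta_t=15.0):
    """Return (row, col) indices whose temperature rise over
    ambient exceeds delta_t (degrees C)."""
    return [(i, j)
            for i, row in enumerate(temps)
            for j, t in enumerate(row)
            if t - ambient > delta_t]

# Synthetic 4x4 radiometric frame (degrees C) with one overheated
# spot, e.g. a loose electrical connection on a switchboard.
frame = [[30.0] * 4 for _ in range(4)]
frame[1][2] = 55.0
print(hotspot_pixels(frame, ambient=28.0))  # [(1, 2)]
```

In a real pipeline the frame would come from a radiometric TIR camera, and the ΔT criterion would follow the applicable maintenance standard rather than a fixed constant.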
Figure 17. Detection and diagnosis framework based on robotic inspections [253].
Figure 18. Image processing of frost.
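The frost image processing of Figure 18 (cf. refs. [258,259,260]) generally reduces to estimating the frosted fraction of the coil surface from a grayscale image, which then drives the defrosting decision. A minimal illustrative sketch follows; the brightness threshold and synthetic image are assumptions, not the cited authors' actual pipeline:

```python
def frost_coverage_ratio(gray, threshold=180):
    """Fraction of pixels classified as frost.

    Frost scatters light and appears bright, so any 8-bit pixel at or
    above `threshold` is counted as frosted surface.
    """
    total = sum(len(row) for row in gray)
    frosted = sum(1 for row in gray for px in row if px >= threshold)
    return frosted / total

# Synthetic 100x100 image: left half bright (frosted), right half dark.
img = [[220] * 50 + [15] * 50 for _ in range(100)]
print(frost_coverage_ratio(img))  # 0.5
```

A controller might trigger defrosting once this ratio crosses a tuned setpoint, instead of defrosting on a fixed timer.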
Figure 19. Schematic diagram of the indirect evaporative cooling (IEC) system.
Figure 20. IEC heat and mass transfer mechanism diagram.
Figure 21. Visualization lab bench [269]. (a) Photo; (b) 3D schematic diagram.
Figure 22. Condensation image processing [269].
Table 1. Keywords in three areas.

Indoor Environment Monitoring (2013–2023): computer vision; infrared thermal imaging; video image processing; occupant behavior; physiological parameter; hot pose; cold pose; non-contact measurement; thermal comfort.

Outdoor Environment Monitoring (2013–2023): outdoor thermal comfort; pedestrian thermal comfort; street view image; street view photographs; street-level imagery; artificial intelligence; computer vision; visual analytics; behavior patterns; sky view factor; greenway planning; urban morphology; urban spatial indicators; urban environment; urban facade color; new urban data; construction sites; construction equipment; monitoring; robot/robotics; visual object detection.

HVAC Equipment Monitoring (2003–2023): infrared thermography; infrared thermal imaging; equipment health; heat exchangers; refrigeration; fin; monitoring; fault diagnosis and detection; non-contact measurement; image; robot; frosting/frost; condensation; automatic observation; image processing.