
1 Introduction

In recent years there has been a marked growth in the use of digital tools in healthcare. In particular, there is increased interest in using digital technologies in the domain of mental health, and one specific area of interest is public awareness campaigns aimed at improving health education and reducing stigma. The clinical area of interest for this project is Attention Deficit Hyperactivity Disorder (ADHD), a neurodevelopmental condition characterised by three core behaviours: inattention, hyperactivity and impulsivity. It is typically thought that around 3–5 % of school-aged children have ADHD, with lifetime persistence for the majority. Research has found public uncertainty about the validity of ADHD as a diagnosis and scepticism towards ADHD treatment, which could impact on access to, and engagement with, appropriate diagnosis and treatment.

Networked Urban Screens (interconnected multimedia displays) offer new possibilities for social interaction in public spaces, with potential for use in public health campaigns. With the aim of realising an engaging experience, an information video about ADHD was combined with a custom, browser-based video game and deployed on an existing networked displays research platform, Screens in the Wild (SitW) [1, 2]. The platform consists of large 46-in. touchscreen (or otherwise interactive) displays running a web browser, each with a camera, a microphone and a speaker, currently deployed at four urban locations in England (Fig. 1a). The screens, two in Nottingham and two in London, are networked together via a central server. The networking allows users to see video streams of other users at remote locations and allows local browser applications to share data.

Fig. 1. (a) Screens in the Wild (SitW) public display platform, photograph of one location. (b) Resource pack ‘Making Sense of Adult ADHD’, Nottinghamshire Healthcare NHS Trust.

The video content was a shortened (2 min) version of a film originally produced to accompany a resource pack, ‘Making Sense of Adult ADHD’, produced in 2013 by Nottinghamshire Healthcare NHS Trust and aimed at educating patients and healthcare professionals about ADHD in adults (Fig. 1b). In the video, patients who had been treated by the Trust speak about their experiences of living with ADHD and the benefit of having a diagnosis. The game ‘Attention Grabber’ was developed from an initial proof-of-concept by the authors [3]; a commercial graphic design company and web designer were then contracted to implement a polished version with attractive graphics and animations (Fig. 2). The game is based on a psychometric reaction-time and impulse-control test for ADHD, with the addition of game elements. The adapted test presents stimuli on-screen for approximately 2 min; players are instructed to watch a sequence of different fruit but to touch only one type, bananas, and ignore the rest. Scores are awarded as the game progresses, dependent on reaction time, and penalties are given for incorrect or missed selections. During the game, text feedback is provided to players across the top of the screen, such as “Lightning!! +1125”, “Not bad +700”, “Missed it! −200” or “Wrong fruit! −100”. The player’s score is compared to the location’s high score at the end, and the local high score is uploaded to the server to be shared amongst the four locations. Further work by the project team was carried out to fully integrate the game with the SitW platform.
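As an illustration of the scoring mechanic just described, the following is a minimal JavaScript sketch; the function names, point banding and reaction-time thresholds are hypothetical assumptions, with only the penalty values and feedback strings taken from the quoted game text.

```javascript
// Minimal sketch of the reaction-time scoring mechanic. Banding and
// thresholds are hypothetical; only the penalty values and feedback
// strings are taken from the deployed game as quoted above.
var MISS_PENALTY = -200;   // a banana appeared but was not touched
var WRONG_PENALTY = -100;  // the player touched a non-target fruit

function scoreTouch(stimulusType, reactionMs) {
  if (stimulusType !== 'banana') {
    return { points: WRONG_PENALTY, feedback: 'Wrong fruit! -100' };
  }
  // Hypothetical banding: faster reactions earn more points
  // (e.g. 375 ms -> +1125, 800 ms -> +700).
  var points = Math.max(100, Math.round(1500 - reactionMs));
  var feedback = (reactionMs < 400 ? 'Lightning!! +' : 'Not bad +') + points;
  return { points: points, feedback: feedback };
}

function scoreMiss() {
  // Called when a target stimulus disappears without being touched.
  return { points: MISS_PENALTY, feedback: 'Missed it! -200' };
}
```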

Fig. 2. Design of the Attention Grabber game. (a) Final home page layout with high scores and webcam feeds at bottom. (b) Example of game play on the touchscreen.

Examples of awareness initiatives for mental health that use game elements are few [3], and where they have been implemented they have not been evaluated explicitly. In order to integrate evaluation into this application, four questions were embedded: three (on a 4-point scale) before the video and one after. Data from game plays, video viewing and answers to the questions were captured on the SitW network server and subsequently analysed after the installation had run for 5 months. The technical design aspects of the installation are first described in detail, including the testing process, followed by the results, analysis and discussion.

2 Technology and Data Capture Overview

2.1 The ‘Screens in the Wild’ Network

The ‘Screens in the Wild’ (SitW) Network is an output from research into Urban Screens and their potential applications for communities and culture [1, 2]. A novel feature of SitW is that it moves the state-of-the-art from standalone Urban Screens, as frequently seen in many city-centre squares, to Networked Urban Screens, where interaction and content are shared between screens [4–6]. The SitW network consists of a set of urban screen nodes (currently numbering four), based in two cities in England, UK: Nottingham (Broadway cinema, BW, and New Art Exchange, a contemporary visual arts space, NA) and London (Walthamstow ‘The Mill’ community space, WA, and Leytonstone public library, LE, later moved to the Edgware Rd. ‘Church Street neighbourhood centre’, ER, due to refurbishment of LE during the period of the project). All the nodes featured a touchscreen attached to the glass of an outside window. NA was augmented by a touch-pad to the right of the screen because the glazing particular to that site did not transmit touch.

The key functions of the SitW platform, the Attention Grabber web app and the video application are now described. Table 1 details the components found at each screen node in the SitW Network (some available via GitHub [7]).

Table 1. The components used in each SitW screen node

Figure 3 shows the system architecture for the SitW system. Conceptually, SitW consists of four layers: a Screen Client Layer, a Middleware Layer, an Administration Layer and a Data Store Layer. The architecture uses UNION, an off-the-shelf client/server infrastructure (henceforth called the Interaction Server), to provide real-time multi-user functionality for applications across the screen network [8].

Fig. 3. SitW system architecture.

2.2 Attention Grabber Web App and Server Interactions

The game Attention Grabber [4] is a web app written in HTML5/JavaScript and designed to run in either the Chrome or Firefox browser (in full-screen mode). As implemented on SitW, the app is based on a single URL, with all game-play screens and their visual components and behaviours programmatically hidden or revealed as required. From the user perspective, this generates a game-play sequence of: the game, three initial questions (Q1, Q2 and Q3), an informational video about ADHD and a final question (Q4). Animated features to attract players to the home screen included flashing fruit and horizontally wiping text such as “Are you paying Attention?”.
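The single-URL design can be realised with a simple screen-switching pattern, sketched below; the element ids and class name are hypothetical, assuming each stage of the sequence is a DOM element that is shown or hidden in turn.

```javascript
// Illustrative screen switching for the single-URL app: each stage
// (game, Q1-Q3, video, Q4) is a <div class="screen"> with an id;
// the ids and class name are hypothetical.
function showScreen(id) {
  var screens = document.querySelectorAll('.screen');
  for (var i = 0; i < screens.length; i++) {
    screens[i].style.display = (screens[i].id === id) ? 'block' : 'none';
  }
}

// The user-facing sequence described above.
var sequence = ['game', 'q1', 'q2', 'q3', 'video', 'q4'];
var step = -1;
function nextScreen() {
  step = Math.min(step + 1, sequence.length - 1);
  showScreen(sequence[step]);
}
```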

SitW functionality was added to the original web app using the SitW Experience Template (planned availability via GitHub [7]). This is a single JavaScript file (sitw.js) that can be included in a JavaScript-based web app to provide UNION Interaction Server functionality [8]. Video playback in Attention Grabber was provided by the SitW component ScreenBase-Simple-Video (planned availability via GitHub), an embeddable version of the ScreenBase-Video-Player (available via GitHub). It provides a basic, customisable MP4 video player, with controls and layout suitable for use on a SitW screen node, and is implemented in JavaScript so that it is readily embeddable into other JavaScript-based web applications.
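A sketch of how these components might be brought into the app is shown below; only sitw.js and the component names come from the description above, while the video-player file name, container id, constructor and video file name are hypothetical.

```html
<!-- Hypothetical include/initialisation sketch; only sitw.js and the
     component names are taken from the text. -->
<script src="sitw.js"></script>                    <!-- SitW Experience Template -->
<script src="screenbase-simple-video.js"></script> <!-- ScreenBase-Simple-Video -->
<div id="video-container"></div>
<script>
  // Hypothetical constructor: embed the MP4 player and load the
  // ADHD information video.
  var player = new ScreenBaseSimpleVideo('#video-container', 'adhd-info.mp4');
</script>
```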

Each instance of Attention Grabber, running on a specific SitW screen node, is uniquely identified by a two-digit location code, passed as a parameter in the content URL. The SitW Screen Content Schedule File (hosted on the Web Content Server) provides the appropriate URL to each screen via the SitW Client Software. Each local instance of Attention Grabber therefore knows where it is located when communicating with the Interaction Server.
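A minimal sketch of how an instance could recover its location code from the content URL follows; the parameter name 'loc' is a hypothetical assumption.

```javascript
// Read the two-digit location code from the content URL,
// e.g. .../attention-grabber/?loc=01 (the name 'loc' is hypothetical).
function getLocationCode() {
  var match = /[?&]loc=(\d{2})/.exec(window.location.search);
  return match ? match[1] : null;
}

var locationCode = getLocationCode(); // identifies this screen node
```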

Attention Grabber’s awareness of screen location is also used to update the four game high scores displayed on the game interface. It is worth noting that Networked Urban Screens may have multiple modes within which content experiences are shared by users. Two of the fundamental modes are synchronous (content is shared in real time, similar to an online multi-player action game) and asynchronous (content is shared ‘as and when’, similar to social networks). Attention Grabber is asynchronous in nature because it is possible to play the game (even competitively) on your own, without players being simultaneously present at the other screens. The high-score feature makes the experience shared, even though game play may occur in a chronologically staggered manner.

The data sharing is made possible by the SitW Interaction Server. When each instance of Attention Grabber is loaded by the web browser, it connects to the Interaction Server and registers for relevant Interaction Events. It is provided with a ‘snapshot’ of the current high scores (for each of the four screens), which are immediately displayed. If a user at any of the screens generates a score that exceeds the current local high score, the improved score is shared with all connected instances of Attention Grabber, which immediately update the appropriate high score. The high scores are saved by the Interaction Server as ‘persistent attributes’, meaning that they survive not only a disconnection of all Attention Grabber clients from the Interaction Server but also a restart of the server itself. User responses to Attention Grabber’s questions are stored in a database using server-side PHP and MySQL. Table 2 shows the fields captured to the database for each user session.
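The high-score flow described above can be summarised as in the sketch below; the interactionServer object and its method and event names are hypothetical stand-ins for the UNION client API, which is not documented here.

```javascript
// Hypothetical wrapper around the UNION client; method and event
// names are illustrative, not the actual UNION API.
var highScores = {}; // keyed by two-digit location code

function renderHighScores(scores) {
  // Update the four high-score panels on the home screen (stub).
}

// On load: connect, register for score events and take a snapshot
// of the current high scores for all four screens.
interactionServer.connect(function () {
  interactionServer.on('highScoreChanged', function (loc, score) {
    highScores[loc] = score; // another screen beat its local record
    renderHighScores(highScores);
  });
  interactionServer.getSnapshot('highScores', function (snapshot) {
    highScores = snapshot;
    renderHighScores(highScores);
  });
});

// At game end: if the local record is beaten, persist it server-side
// (as a 'persistent attribute') so all instances are updated.
function submitScore(loc, score) {
  if (score > (highScores[loc] || 0)) {
    highScores[loc] = score;
    interactionServer.setPersistentAttribute('highScore.' + loc, score);
  }
}
```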

Table 2. Data fields captured during each session

A session (or part-session) is saved as a database record in one of the following situations: when any Attention Grabber ‘Restart’ button is pressed; when any screen-inactivity timeout is triggered (indicating that the user has abandoned at some stage); or when the ‘Finish’ button is pressed. Since abandonment timeouts can trigger the generation of a part-record, example causes of part-records in the data include: the game has been played, but the user has abandoned before the questions and video; the user leaves at some point during the questions; or the user leaves during video playback. Therefore, only a touch on Attention Grabber’s ‘Finish’ button generates a record that has all of the required data. Attention Grabber was also implemented with a backend administration web page, password protected and available only to the research team. From this page, it was possible to review all collected questionnaire data and to download it as a CSV (Comma-Separated Values) file.
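The three save triggers can be wired up as sketched below; the endpoint name save_session.php, the timeout value and the record shape are hypothetical, following the PHP/MySQL backend described above.

```javascript
// Save a (possibly partial) session record on any of the three
// triggers: 'Restart' pressed, inactivity timeout or 'Finish' pressed.
// Endpoint, timeout value and record shape are hypothetical.
var INACTIVITY_MS = 60 * 1000;
var inactivityTimer = null;

function saveSession(record) {
  var xhr = new XMLHttpRequest();
  xhr.open('POST', 'save_session.php'); // server-side PHP -> MySQL
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.send(JSON.stringify(record)); // may be a part-record
}

function resetInactivityTimer(record) {
  clearTimeout(inactivityTimer);
  inactivityTimer = setTimeout(function () {
    saveSession(record); // user abandoned mid-session
  }, INACTIVITY_MS);
}
```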

2.3 Testing

Application usability and playability were tested in three phases. First, iterative usability and play testing on local PCs (involving the entire project team) was conducted from June 2014, when the graphics and web designer were contracted, until handover back to the project team two months later. Most changes were made by the web design contractor, with some additional refinements after handover. Changes included: modifying the wording of questions, the textual information on the game screen and the names of graphical buttons; replacing all radio buttons with graphical buttons; adding the ‘Restart’ buttons and a ‘Next’ button on each question to allow confirmation of the selected answers; adding a ‘Play Again’ feature; reducing the presentation period, but increasing the visible/invisible ratio within the period, for each stimulus; adjusting timeouts; and adjusting the score/penalty balance.

Second, field testing was conducted at the two Nottingham locations (BW and NA) using heuristic evaluation, with two members of the team at the same location (one playing, one noting errors) and a third member monitoring the server, contactable by mobile phone to perform a remote reset in the case of errors. Upload of high-score and question data to the server was also tested from each site.

Third, further field tests were conducted between the two Nottingham locations to test the transfer of high scores from one site to the other after play (via the server). These tests involved the same three members of the project team, through multiple plays at both locations and with the remote server monitor, all with mobile phones. It was also possible to check, in an ad hoc manner, for evidence of game plays by the public at the other sites by virtue of the high scores changing, and to further check the server-side collection of question answers and score data.

Phase one iterative usability and play testing was completed over two months from June 2014. The second and third phases were conducted over one month, with final testing one week before the official launch on 4th September 2014.

3 Results

Following the day of the launch, data collection continued for 5 months, from 5th September 2014 to 4th February 2015, with the application running and playable for 5 h a day: in the morning (1½ h, 8:00–9:30), late afternoon (2½ h, 15:30–18:00) and evening (1 h, 20:00–21:00). The exception was location NA, which was accessible to users for only 3 h a day (1 h on Sundays) due to shutters on the window at other times. Outside these times the platform was running other SitW applications. In addition, one London screen, LE, was moved to the ER location, with a downtime of one month. At all locations there was a short break during the holiday period, when the server was not being monitored and applications were therefore not deployed. This resulted in a maximum total playable period of 670 h over 134 days each for BW and WA, 369 h over 134 days for NA, 155 h over 31 days for LE and 310 h over 62 days for ER after the screen had been moved from LE.
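As a check on these totals (NA's 369 h reflects its reduced access of 3 h a day, with 1 h on Sundays), the figures follow from the daily schedules:

```latex
\begin{align*}
\text{BW, WA:} \quad & 5~\text{h/day} \times 134~\text{days} = 670~\text{h each}\\
\text{LE:} \quad     & 5~\text{h/day} \times 31~\text{days} = 155~\text{h}\\
\text{ER:} \quad     & 5~\text{h/day} \times 62~\text{days} = 310~\text{h}\\
\text{Total:} \quad  & 670 + 670 + 369 + 155 + 310 = 2174~\text{h}
\end{align*}
```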

Over the total playable time of 2174 h, a total of 781 plays were recorded after cleaning the data to remove duplicate database entries due to restarts/timeouts. On average, a game was played every 2.78 h, or around two per day in the playable period. Of these, 194 plays (25 %) included at least one answered question, the rest being game plays without any answers (and, by implication, without the video being watched). Of the 194, 142 players (73 %) answered question Q4, which implies that the video was also watched to the end. Table 3 shows the distribution of scores at the five locations; there were few outliers (only at LE beyond 2.5 SD, and none beyond 3 SD). Some variation in scores is apparent between sites. The highest mean and maximum scores were at BW. The lowest mean and maximum scores were at NA, which is likely to have been a result of the different user interface (the separate touch-pad).

Table 3. Score results from game plays, where embedded question(s) were also answered

Table 4 shows the results from the embedded questions after data cleaning. From the answers to Q1 (N = 194), 85 % of players agreed that playing the game had made them think about their attention span: 39 % rating ‘a lot’, 17 % ‘some’, 29 % ‘a little’ and 15 % ‘not at all’. For Q2 (N = 174), 73 % of players knew something about ADHD: 32 % rating ‘a lot’, 24 % ‘some’, 17 % ‘a little’ and 27 % ‘none at all’. For Q3 (N = 157), 71 % of players were aware of Adult ADHD: 41 % rating ‘a lot’, 17 % ‘some’, 13 % ‘a little’ and 29 % ‘not at all’. For Q4 (N = 143), most players (93 %) agreed that their knowledge had increased from watching the video: 41 % rating ‘a lot’, 15 % ‘some’, 37 % ‘a little’ and 7 % ‘not at all’.

Table 4. Results from the embedded questions

Results were subsequently analysed using IBM SPSS software to investigate statistical differences between locations and to test correlations with score. ANOVA tests on answers to Q1, Q2, Q3 and Q4 versus location showed significant differences between locations (p < 0.05) for all four questions. Post hoc tests showed these differences were mostly due to locations ER and NA; however, there were some other differences (e.g. between LE and the other locations), which are also suggested by the mean ratings in Table 4. Since location NA had a different (touch-pad) interface, this may have been a factor, but it is difficult to theorise much further since the differences may have been due to a number of factors, including other technical issues (especially since it was not possible to test the game at ER after the move), characteristics of the location (e.g. library versus cinema) or player population differences (e.g. age or other demographics).

Within-subject correlations between answers to questions were also investigated. This analysis proved more insightful, since there was a strong positive correlation between answers to Q2 and Q3 (Pearson correlation coefficient 0.811, p < 0.001) and between Q2 and Q4 (Pearson correlation coefficient 0.516, p < 0.001), and a moderate positive correlation between Q3 and Q4 (Pearson correlation coefficient 0.433, p < 0.001). These relationships were maintained for all location sub-groups (where there were enough answers to a question). Differences in ratings between Q2 and Q3 were investigated using paired-samples t-tests, but only one significant result was found: a mean difference Q2–Q3 of 0.24 at location WA (p < 0.05). Overall, knowledge of ADHD and awareness of Adult ADHD were similarly rated and positively correlated with the reported increase in knowledge. Correlation of Q1 (and the other questions) with score was investigated, but no relationship was found.
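For reference, the coefficient reported here is the standard Pearson product-moment correlation over the $n$ paired ratings $(x_i, y_i)$ for a given pair of questions:

```latex
r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}
         {\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}}
```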

4 Discussion and Conclusions

The implementation of a mental health awareness vehicle using a browser-based game and educational video was successfully realised within 3 months on an existing networked public displays platform, Screens in the Wild. The Attention Grabber game appeared to be stable and playable, and the SitW platform performed well for the purpose of delivering the multimedia content and facilitating a degree of connectedness between players, through the webcam feeds and score sharing, although the impact of the latter on the study results was not investigated. We are not aware of any public health campaign that has attempted to gather data through questions embedded in an urban screen multimedia application, at least in the domain of mental health. Nearly 800 plays were recorded in the 5-month period, indicating that it is possible to engage the passing public with this approach.

Results indicate good public engagement with the application, since 25 % of players who took the time to finish the game were prepared to answer the embedded questions afterwards, and most (73 %) of those who answered the first three questions immediately following the game also watched the video to the end and answered the final question to finish the experience. This shows it is possible to gather data even in an unsupervised setting once the audience is engaged, although it is not possible to say how much the game was responsible for this, other than that it provided an attractive means to initiate engagement. We cannot say much about the majority who played the game without answering the questions (e.g. was this due to multiple plays by the same player, or did these players not wish to answer questions?), but it is clear that the game was of interest on its own, aside from the rest of the installation.

Responses to the embedded questions provide insights into prior knowledge of ADHD and the learning achieved through engagement with the application. The game was moderately successful in prompting people to think about their own attention span (with 39 % rating ‘a lot’). We had not anticipated that existing awareness of ADHD persisting into adulthood would be so high (with 41 % rating ‘a lot’). Nevertheless, the results indicate a further increase in knowledge through this campaign, since most (93 %) of the participants who progressed to the end of the video presentation, featuring adults who had been diagnosed with ADHD, agreed that it had increased their knowledge. Therefore, when users engage with the entire content, Attention Grabber can be considered a successful approach to public health education and awareness. Differences across the locations in the answers to the embedded questions are not easy to explain and could have varied causes. Furthermore, no relationship was found between the game scores and the question answers, which was quite surprising, at least for Q1; on the other hand, being asked to ‘think about’ attention span is perhaps neutral with respect to performance, and in hindsight a different wording could have been used.

This limited ability to interpret results is one of the trade-offs of using technology in public, where people are free to use it how they like and where information about individuals is not available to researchers (at least, without reviewing hours of webcam material to obtain visual clues, e.g. multiple plays by the same person). Whilst further questions could have been added to gain information about individuals, this would have increased the burden and time taken to complete the task, and public participants are free to walk away at any time if they become bored or simply decide to carry on with their day, returning to what they were doing before being attracted to the display. Other possible solutions could include spot surveys at the locations or inviting players to answer additional questions off-line (e.g. via a QR code that players could follow to a web page on their mobile phone), but it is likely that either approach would only yield data for a small proportion of the sample.

Overall, our findings are promising in relation to the potential of Networked Urban Screens for public health awareness, specifically mental health awareness. This suggests merit in further research to develop both the methods for engagement and the methods for evaluation, to add to this evidence.