Using augmented reality for supporting information exchange in teams from the security domain

Abstract

The performance of operational teams in the security domain relies on quick and adequate exchange of context-related information. Currently, this information exchange is mainly based on oral communication. This paper reports on different scenarios from the security domain in which augmented reality (AR) techniques are used to support such information exchange. The scenarios have been elicited using an end-user centred design approach. To support these scenarios, an AR environment has been developed and the usability of the AR support has been evaluated with experts from different operational units in the security domain. The first evaluation shows that the scenarios are well defined and the AR environment can successfully support information exchange in teams operating in the security domain.

Background

In the security domain, operational teams rely on quick and adequate exchange of context-related information. This is important because such teams must process and distribute information under time pressure. Decisions and choices can normally not be undone; they are taken based on the information provided. The available information thereby has a high impact on the further course of operations.

Nowadays, operational teams exchange information mainly orally [1, 2]. This communication is often standardized in order to avoid critical human mistakes [3]. Oral communication, especially under time pressure, can be understood and interpreted differently by different team members [4]. Furthermore, there might be unequal information distribution among team members, as is seen in other crisis scenarios [5]. As a result, wrong decisions or choices for the continuation of an operation can be taken, leading to an unsafe situation for the operational team as well as for possibly affected civilians.

Successful communication relies on a foundation of mutual knowledge or common ground [6]. Shared visual spaces facilitate and support conversational grounding [7–9] and thus the development of a common ground [6]. Additionally, visual information in the shared visual spaces further facilitates the creation of situational awareness, which, in combination with the conversational grounding, improves collaborative task performance [6]. Situational awareness (SA) develops when individuals involved in a certain situation look around, gather information about the situation, make inferences, test their inferences, and draw further inferences from the results [10]. To better understand SA within teams, Endsley [10] introduces the concept of team SA, which is defined as “the degree to which every team member possesses the situation awareness required for his or her responsibilities” [10]. According to Endsley and Robertson [11], successful team performance requires that individual team members have good SA on their specific task, and good team SA depends on team members understanding the meaning of the information exchanged in the team.

AR systems allow users to see the real world with virtual objects superimposed upon or composited with it [12, 13], where virtual objects are computer graphic objects that exist in essence or effect, but not formally or actually [14]. AR systems are not limited to head-mounted devices; they mainly have to combine real and virtual objects as defined above, be interactive in real time, and register objects in 3D [12]. AR systems can be used to establish the experience of being practically co-located by means of simulated presence. They have been used to allow experts to spatially collaborate with others at any other place in the world without traveling, thereby creating the experience of being virtually co-located, e.g. in the field of crime scene investigation [15]. AR systems have also been used to increase social presence in video-based communication [16] or to help in complex assembly tasks [17]. Such new approaches create new collaborative experiences: they allow distributed users to collaborate on spatial tasks and create a shared understanding [15].

We explore how augmented reality (AR) systems can be used to facilitate information exchange in teams and thereby foster team situational awareness (TSA). Higher levels of situational awareness reflect a better understanding of the operational situation. Thus, when remote collaboration in AR improves situational awareness, better decisions are taken in the security team. This, in turn, leads to fewer mistakes made in time-critical security situations.

In our study, the security domain is represented by the Dutch Police and the Netherlands Forensic Institute (NFI). Using an end-user centred approach, different scenarios for using AR to support information exchange have been identified with these partners [18]. An AR environment (called DECLARE) supporting these scenarios has been developed. The paper elaborates on the first findings of a usability study exploring to what extent AR improves information exchange and team situational awareness.

Related work

Challenges in the field

People working in teams need to act reciprocally; they are interdependent with other team members and share one working environment. Information about the situation within this process is of crucial importance, as it allows teams to manage the process of collaborative working and to coordinate group or team activities [19]. Many studies show that the quality of communication or information sharing is related to team performance [20–23]. The effectiveness of a team is often reflected by the degree to which team members engage in processes for sharing information [24], while being engaged in both verbal and non-verbal communication. Work in the safety domain is collaborative by nature. Still, effective collaboration in this field seems to be difficult to realize. Berlin and Carlström [25] study why collaboration is often minimised at an accident scene. Based on observations and semi-structured interviews, they discover that collaboration is often considered an ideal rather than something that is really carried out. As major reasons for only limited forms of collaboration, they identify information asymmetry, uncertainty and lack of incentives. Smith et al. [26] are of the opinion that it is difficult to consider crime scene examination from a team perspective, as usually several different teams from different organisations need to work together. The work is then centred around the collection of information and evidence in consultation with different people. The work effectiveness relies very much on the efficiency of each individual team, the communication of results and the coordination among the teams.

There are some further issues analysed in police teamwork which are related to our study. Streefkerk et al. [27] noticed that police officers often have no overview of the availability and location of other team members [27, 28]. As a result, police officers often do not know which of their colleagues is available to handle an incident, leading to incidents that remain unattended. Motivated by this observation, they consider team awareness the major challenge for police team tasks.

For collaboration, information sharing is crucial. Bharosa et al. [29] discuss challenges and obstacles in sharing and coordinating information during multi-agency disaster response. They consider challenges from an inter- and intra-organisational as well as an individual perspective. As major challenges they identify conflicting role structures, mismatches between goals and independent projects, a focus on vertical information sharing, information overload, the inability to determine what should be shared, and the prioritization of one's own problems. Bharosa et al. [29] further identify factors that influence information sharing and coordination, such as improving interaction and familiarity with other roles, knowledge of other agencies' operations, and information and system quality. Furthermore, challenges with regard to information flow during crisis management occur [5]. Based on case studies, Militello et al. [5] identify asymmetric knowledge and experience, barriers to maintaining mutual awareness, uneven workload distribution and disrupted communication as major challenges. For each of the challenges, different recommendations are presented. To overcome asymmetric knowledge, they suggest providing tools for communication and training their usage. To improve mutual awareness, they propose the use of shared displays. To address the uneven workload, they suggest assigning roles more clearly and making their responsibilities known across organisations. The latter is also stressed in [30].

Augmented/virtual reality technology driven collaboration support

Some of the before-mentioned challenges of collaboration in teams have been addressed using augmented reality technology. The rest of this section discusses some of the most relevant studies in this regard.

Augmented reality (AR) and virtual reality (VR) techniques have been extensively explored through scenarios in various domains, as part of novel technologies in the field of computer-supported cooperative work. Alem et al. [31] propose ReMoTe, a remote guiding system that integrates non-mediated hand gesture communication. In ReMoTe, a remote expert assists a worker. The hands of the remote expert are projected into the view of the worker to point to certain locations and to show specific manual procedures.

Stansfield [32] presents virtual reality (VR) applications designed for nuclear safeguards and non-proliferation, with support for facility visualization, training scenarios, and security and monitoring of nuclear facilities. Stansfield et al. [33] present an early VR environment for situational training of inspectors to verify compliance with nuclear non-proliferation treaties. The system monitors the actions of the users in the VR environment and reports when a user fails to prevent a security infraction. Although VR offers functional capabilities with easily accessible technical means, AR supports the investigation of collaborative behaviour in more realistic work scenarios, such as those in the security domain.

Nilsson et al. [34] propose an AR tool to improve collaboration between actors from different organizations, such as the rescue services, the police and military personnel, in a crisis management scenario while at the same time sustaining individual needs.

Gu et al. [35] conduct a study on the impact of 3D virtual representations and the use of tangible user interfaces as support for synchronous design collaboration using AR technology. The results indicate that the change from a physically co-located working environment to a virtual co-located scenario encourages the AR users to smoothly move between working on the same tasks and working on different tasks or different aspects of the design process.

Distributed collaboration in AR has often been studied in comparison with physically co-located, face-to-face scenarios. Schnier et al. [36] focus on studying the issues around establishing joint attention toward the same object or referent in a physically co-located collaborative AR environment. The study reveals the difficulties in coordinating participants' foci of attention and the necessity of constantly being attentive to the lack of the co-participant's reaction while preparing one's own contributions in the collaborative scenario. Yabuki et al. [37] present a system in an early phase of development, aiming to support collaboration between people working outdoors on environmental issues.

Datcu et al. [38, 39] propose an AR-based collaborative environment to study the effect of AR when supporting complex problem solving between physically co-located and virtually co-located participants. Within the environment, the goal of jointly building a tower of coloured blocks represents an approximation of a shared task. Individual expertise is modelled as the possibility to move blocks of a distinct colour, and shared expertise is modelled by the possibility for all players to move blocks of the same colour.

The above discussion provides several examples of the use of AR to support collaboration among users in various domains. The examples vary in several aspects. Users are either physically or virtually co-located. They use free-hand or tangible interaction with physical objects. In some cases users are static; in others, users are mobile. Finally, some examples make use of HMDs while others rely on different visualization devices. However, until now the usability of AR systems for information exchange within teams working in the security domain has not been properly explored. The current paper focuses on the applicability of AR techniques for complex collaborative environments in the security domain. The study involves a continuous series of parallel actions by players of multiple teams that demand coordination not only in parallel within a given time interval but also over time, between subsequent series of actions.

In the following, the paper describes the design of 3 different AR scenarios in the security domain. The scenarios are then used to study the usability of AR for information exchange in security teams.

Usability study

Scenario design

We used the triadic game design (TGD) philosophy as the leading design approach for the scenarios. The TGD philosophy [40] is an end-user oriented design approach distinguishing three equally important game components: play, meaning, and reality. The philosophy states that an efficient game should allow a valid, meaningful and engaging experience. For our purpose, we transfer the TGD philosophy to the development of engaging scenarios which can be ‘played’ by the participants in order to test the use of the technology. Hereby, the play aspect refers to elements which support engagement and motivation for the exercise.

The meaning aspect addresses the purpose of the exercise. The reality aspect refers to the degree of realism within the scenarios, e.g. how realistic the tasks within the scenario are compared to the real work of the police agents. The philosophy also discusses possible dilemmas between the three aspects, which we will not discuss further at this point, but of which we are very aware. For example, the effort to develop a highly realistic scenario, which represents the actual communication structure of a task in real life, could hinder the playability of the scenario, as players would have to communicate in a different way in order to be fully engaged in the exercise. Earlier design experiences with operational units in the security domain [41] have shown that playful, meaningful and realistic scenarios can be identified using TGD. We also conceptualize the AR scenarios as games, as they show many characteristics of simulation games, like different roles, specific rules, and especially technical resources [42]. They are to be further developed into mature simulation training games.

As a first step, we developed scenarios for the tests, based on the TGD philosophy, together with experts from the field.

During a half-day workshop in which 12 members of 4 different operational units participated, 3 different scenarios have been identified (Fig. 1). The TGD philosophy has been used as a guideline for the workshop. The three elements play (P), meaning (M), and reality (R) have been addressed while defining the AR scenarios. To do so, we held pre-structured focus group interviews with the 12 experts along the three aspects. We started the discussion with the reality aspect, moved on to the meaning aspect, and, as a last point, discussed requirements for the playability of the scenarios. Two researchers observed the discussion and took notes, which were compared afterwards and used for the development of the 3 scenarios. In all 3 scenarios, AR is used to establish virtual co-location. Virtual co-location entails that people are virtually present at any place in the world and interact with others who are physically present in another location by using AR techniques. The following sections describe the identified scenarios.

Fig. 1

Example of virtual co-location scenarios of one local worker (field officer investigating domestic violence—top left, policeman in the VIP protection scenario—top right and forensic investigator—second row) wearing an HMD for AR. Field officers are supported from the distance by remote experts using the AR desktop interface (third row)

VIP protection

A policeman equipped with a head-mounted device (HMD) investigates a safe house in which a witness needs to be safely accommodated (R). This policeman shares the local view, as recorded by a camera in the HMD, with a remote colleague (P). While the local policeman investigates the safe house, the remote colleague has the task to highlight suspicious objects in the house and point out possible emergency exits by augmenting the view of the local policeman. Augmentation can be performed by placing geometric shapes, text or arrows in 3D. The local policeman has to support the investigation of the house (M). For the scenario, the training location needs to be prepared with suspicious objects, e.g. a suitcase, that can be identified. Audio communication among the policemen needs to be established (R).

Forensic investigation

A forensic investigator arrives at a severe crime scene. Wearing an HMD, the investigator shares the local view with a remote colleague (R). The remote colleague has the task to point the local colleague to possible evidence, take pictures of evidence, support the preparation of 3D laser scans, and mark areas at the scene that are to be avoided. For that purpose, the remote colleague can augment the view of the local investigator with virtual laser scanning stickers, resizable geometric shapes, arrows, and text (P). During the scenario, the local investigator has the task to replace the virtual laser scanning stickers with real ones, stay clear of marked areas and support the remote colleague in investigating the scene (M). For the scenario, the training location needs to be prepared with mock-up blood patterns, mock-up evidence, e.g. a gun or knife, as well as evidence that is to be avoided, e.g. a mock-up dead body. Furthermore, it is necessary to establish audio communication among the investigators (R).

Domestic violence

A team of 2 policemen arrives at a scene of domestic violence (R). One of the policemen wears an HMD and shares the local view with a remote colleague.

The design of this scenario is adapted to a technical limitation of the current version of the AR-based remote collaboration system, which supports only one local policeman. Future work will focus on extended studies of AR to support information exchange in security teams, including AR support for multiple local policemen.

The remote colleague can provide instructions, provide information on the case and the persons present, take pictures and highlight possible evidence. For that purpose, the remote colleague can augment the view of the local policeman with virtual index cards showing the necessary information, resizable geometric shapes, arrows, and text. For the index cards, the remote policeman can indicate different urgency levels by surrounding the index cards with either a green, yellow or red frame (P). The local policeman wearing the HMD needs to talk to the persons present, follow the instructions, support the remote colleague in investigating the scene and orally share the received information with the second local colleague (M). For the scenario, the training location needs to be prepared with possible evidence, such as a broken vase, knife or gun. Additionally, two actors need to play the case of domestic violence, and audio communication among the policemen needs to be established (R).

Participants

11 policemen and inspectors (including one woman) from 3 national Dutch security institutions participated in the usability study, playing roles in the 3 scenarios introduced in sub-section “Scenario design”: VIP protection, forensic investigation, and domestic violence. 4 of the participants were involved in the design of the scenarios. The rest of the participants were chosen at random, based on their availability on the day of the experiment. None of the participants had used our AR system before.

Measures

Evaluating the usability of the AR system with regard to information exchange needs to consider several aspects (see Table 1).

Table 1 AR aspects (per scenario) to explore, with regard to the usability of the AR for information exchange

Throughout the three scenarios, the exchange of information is bi-directional. The remote user can directly visualize the video stream from the camera of the local user (see item 2a in Table 1). In turn, the remote user can make annotations which are displayed in the HMD view of the local user (items 2b, 2c, 2d, 2e and 2f).

Remote guidance (see item 3 in Table 1) and situational awareness (see item 4 in Table 1) are considered in the study, as these factors are closely coupled with the perception of information exchange and the defined scenarios.

The technical aspects studied throughout the experiment are important, as they influence the experience of a user with regard to the information exchange in AR. The aspects considered (see items 5a, 5b, 5c and 5d in Table 1) are the hardware ergonomics, the network communication, the tracking and the rendering of the graphics.

The different aspects were evaluated with experts from different operational units in the security domain using two different measures: (1) a questionnaire and (2) a debriefing of the participants on their experiences using the AR support. The questionnaire (see Appendix 1) covers aspects 1–4. It consists of 16 closed and 8 open questions on the usability of the system as well as its support with regard to information exchange. The questionnaire is based on studies conducted in the game design field [43] and on Endsley's conceptualization of situational awareness [44].

Experimental set-up

Equipment

DECLARE (Distributed Collaborative Augmented Reality Environment) is a highly scalable, modular and parallel environment (see Fig. 2). DECLARE has been developed to support the virtual co-location of distributed users. Virtual co-location relies on AR [45, 46] to create spaces in which people and objects are either virtually or physically present: it allows people to engage in spatial remote collaboration. For that purpose, a video matching the current view of the local user is streamed to the remote user. Thereby, the remote user can see what the local user sees. To support spatial remote collaboration, the remote user can annotate the video stream with virtual objects that are then also shown in the user interface of the local user. To show the augmentation in the direct sight of the local user, an optical see-through HMD is used (see Fig. 3). The HMD is also equipped with a camera that is used to generate the video matching the local user's view.
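The paper does not report implementation details of this streaming step. As a rough illustration only, the following Python sketch shows the kind of frame-capture-and-send loop such a system needs; the libraries (OpenCV), the endpoint address, the JPEG compression and the length-prefixed framing are all our assumptions, not DECLARE's actual design.

```python
import socket
import struct

import cv2

# Hypothetical endpoint of the remote user's application (assumption).
REMOTE_ADDR = ("192.168.1.20", 5000)

def stream_hmd_camera() -> None:
    """Send JPEG-compressed frames from the HMD camera to the remote user."""
    cam = cv2.VideoCapture(0)  # HMD camera exposed as a capture device
    sock = socket.create_connection(REMOTE_ADDR)
    try:
        while True:
            ok, frame = cam.read()
            if not ok:
                break
            ok, jpeg = cv2.imencode(".jpg", frame)  # compress for the network
            if not ok:
                continue
            data = jpeg.tobytes()
            # Length-prefixed framing so the receiver can split the stream.
            sock.sendall(struct.pack(">I", len(data)) + data)
    finally:
        cam.release()
        sock.close()

if __name__ == "__main__":
    stream_hmd_camera()
```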

Fig. 2

Diagram of the main components of DECLARE

Fig. 3

Optical see-through HMD (META SpaceGlasses)

Local and remote user applications

DECLARE is based on a centralized architecture. It contains different applications for the local and remote user. The applications have different user interfaces which are created using the Unity game engine. The application for the local user is adapted for the optical see-through HMD as shown in Fig. 3. The application for the remote user runs on a desktop computer or laptop. Thus, the remote user gets a screen-based visualization and can interact with the system by using the keyboard and a standard mouse device.

Directions by the remote user

DECLARE supports the collaboration process by providing the remote user with tools to give directions in the form of spatial annotations. The annotations appear in the view of the local user. The remote user can augment the view of the local user with the following elements:

  • 3D-aligned objects (arrow/cube/sphere),

  • 2D screen-aligned content (text, photo and video),

  • Dialog boxes for entering text for the graphical objects and for setting the colours of some visual elements,

  • Screen-aligned counter and text,

  • Screen-aligned (flashing) panic button in the form of text with a frame,

  • Screen-aligned image and colour border (to show info about a person), and

  • Virtual stickers (indicating the scanning area at the crime scene).

All annotations are encoded as distinct data messages and events that pass through the shared memory system. Updates are automatically sent to each software module or application (such as the software application of the local user or the software application of the remote user) by using a notification and a push system of events and data.
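As a rough illustration of this notification and push mechanism, the sketch below models annotations as messages published to a shared space that immediately notifies every subscribed module. All names, fields and the in-process implementation are hypothetical; the actual shared memory system of DECLARE is not published.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Annotation:
    """One annotation event, e.g. a 3D-aligned arrow or screen-aligned text."""
    kind: str                           # "arrow", "cube", "text", "sticker", ...
    anchor: Tuple[float, float, float]  # 3D key-point position (or screen coords)
    payload: str = ""                   # text content, photo path, colour, ...

class SharedMemorySpace:
    """Toy stand-in for DECLARE's shared memory space: modules subscribe and
    are notified (pushed to) whenever an annotation message is written."""

    def __init__(self) -> None:
        self._store: Dict[str, Annotation] = {}
        self._subscribers: List[Callable[[str, Annotation], None]] = []

    def subscribe(self, callback: Callable[[str, Annotation], None]) -> None:
        self._subscribers.append(callback)

    def publish(self, key: str, annotation: Annotation) -> None:
        self._store[key] = annotation
        for notify in self._subscribers:  # push the update to every module
            notify(key, annotation)

# The local user's application re-renders whenever an annotation arrives.
space = SharedMemorySpace()
space.subscribe(lambda key, a: print(f"local UI: render {a.kind} at {a.anchor}"))
space.publish("ann-1", Annotation("arrow", (0.4, 1.2, 2.0), payload="exit"))
```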

Local user tracking in the physical environment

The local user's motion in the physical environment is automatically tracked by using an implementation of RDSLAM (robust dynamic simultaneous localization and mapping) [47] provided by the developers of RDSLAM. This tracking module is integrated as a module of the DECLARE framework and can run on the computer of the local user or on a separate hardware system. In addition to tracking the HMD position and orientation, RDSLAM performs a mapping of the physical environment (of the local user) and generates an internal 3D representation of the physical world. For that purpose, the RDSLAM module uses the video stream from the local user's HMD.

The physical world is represented in terms of data structures related to key points discovered during tracking, and to recordings of camera position and orientation with respect to the key-points. Such data is mainly used internally by RDSLAM during the tracking process.

The tracking procedure identifies a set of key points (natural visual features) in each frame of the video sequence. The estimation of the camera parameters (location and orientation) keeps track of all key points over time. This means newly discovered key points are stored in the system memory and already-known key points are tracked when detected.

In DECLARE, the tracking module stores all key points detected in the current frame of the video sequence, together with the camera location and orientation data, in the shared memory space. Once this data is stored in the shared memory space, all system modules are notified of the updates so that the data can be read if necessary. The markerless tracking result provides the HMD camera location and orientation, while the mapping result provides a representation of the physical world in the form of a sparse cloud of 3D points.
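A minimal sketch of this bookkeeping, i.e. a per-frame update that inserts newly discovered key points into the sparse map and stores the latest camera pose, is given below. The data structures are our assumptions; RDSLAM's internal representation is considerably more elaborate.

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

@dataclass
class CameraPose:
    position: Tuple[float, float, float]            # HMD camera location
    orientation: Tuple[float, float, float, float]  # orientation quaternion

@dataclass
class KeyPoint:
    point_id: int
    xyz: Tuple[float, float, float]  # 3D position in RDSLAM's reference frame

class TrackingState:
    """Per-frame bookkeeping: new key points are inserted into the sparse map,
    already-known ones are refreshed when re-detected, and the latest camera
    pose is kept for publication to the shared memory space."""

    def __init__(self) -> None:
        self.map_points: Dict[int, KeyPoint] = {}  # sparse cloud of 3D points
        self.pose: Optional[CameraPose] = None

    def update(self, pose: CameraPose, detections: Dict[int, KeyPoint]) -> None:
        self.pose = pose
        self.map_points.update(detections)  # insert new, refresh known points
```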

RDSLAM calibration

Prior to generating valid results, the RDSLAM module requires an initial calibration step. This is routinely done through the remote user’s application at the beginning of the work session. To proceed with the calibration, the remote user presses the SPACE key once to start the process and a second time to apply the settings. The time between the two events is generally short, about 2–3 s.

The calibration requires moving the camera of the HMD horizontally from left to right. Because the localisation and mapping process relies only on computer vision based algorithms, running the calibration step in a properly lit physical environment, which provides enough visually distinguishable patterns, is a crucial task.

The calibration process relies on sending a special event notification to the RDSLAM module via the shared memory space of DECLARE. Once the calibration process is initialized, 2D line segments are drawn on every new frame displayed in the user interface of the remote expert, showing the initial and the actual location of several key-points used for calibration (see Fig. 4). Via the shared memory space, the RDSLAM module passes the coordinates of the line segments to be drawn during the calibration to the remote user’s application.

Fig. 4

Examples of successful (left) and unsuccessful (right) calibration

Calibration is successful when multiple horizontal parallel lines are displayed on the screen (see the left example in Fig. 4), which means that the camera is moving while keeping the same distance to the objects.
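This visual criterion can be phrased as a simple heuristic: the displacement segment of each tracked key point should be near-horizontal, and all segments should have similar length. The toy check below illustrates that reading; the thresholds and the check itself are our assumptions and are not part of RDSLAM.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def calibration_looks_successful(tracks: List[Tuple[Point, Point]],
                                 max_slope: float = 0.1,
                                 max_spread: float = 0.2) -> bool:
    """Heuristic reading of Fig. 4: each segment from a key point's initial
    to its current 2D position should be near-horizontal, and all segments
    should be of similar length (camera translating at constant distance)."""
    if not tracks:
        return False
    lengths = []
    for (x0, y0), (x1, y1) in tracks:
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) < 1e-6 or abs(dy / dx) > max_slope:
            return False  # no horizontal motion, or segment tilted too much
        lengths.append(math.hypot(dx, dy))
    mean = sum(lengths) / len(lengths)
    # Similar lengths across key points indicate parallel displacements.
    return all(abs(length - mean) / mean <= max_spread for length in lengths)
```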

Selection and positioning of 3D virtual objects

The sparse cloud of 3D points as provided by the RDSLAM module represents visual key-points, which connect the augmented world with the physical world. These visual key-points act as virtual anchors supporting the annotation of the physical world with virtual objects.

Via the user interface, the remote user can attach a virtual 3D object to such a visual key point. After choosing the type of virtual object (by selecting a specific icon) from the bar of icons on the top-left side of the user interface (see for example Fig. 7), the remote user can place the selected object by mouse-clicking on the intended location in the current frame of the video sequence.

The mouse click event returns the 2D coordinates of the mouse cursor on the laptop screen. In order to attach the selected 3D virtual object, the 2D mouse cursor coordinates are converted to 3D coordinates as provided by the RDSLAM tracking module. This conversion is done by the RDSLAM module. The mapping implies searching for the key point closest to the equivalent 3D location of the 2D point of the mouse click event generated through the user interface of the remote user. Once computed, the closest 3D point in the coordinate reference system of RDSLAM is sent back to the remote user's application via the shared memory space of DECLARE. Once attached to a key point, a 3D-aligned virtual object is correctly rendered in the subsequent video frames on the user interfaces of the local and remote user. The rendering is continuously updated, consistent with the HMD camera motion.
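The exact matching rule inside RDSLAM is not published. A plausible minimal reading of the above is to project every 3D key point into the current frame with a pinhole camera model and select the point whose projection lies nearest the click, as sketched below; the function names and the projection model are our assumptions.

```python
import numpy as np

def closest_key_point(click_xy, points_3d, K, R, t):
    """Return the 3D map point whose projection lies nearest the mouse click.

    click_xy  : (u, v) pixel coordinates of the click in the remote UI
    points_3d : (N, 3) array of RDSLAM key points in world coordinates
    K         : (3, 3) camera intrinsics; R, t: camera rotation and translation
    """
    cam = R @ points_3d.T + t.reshape(3, 1)  # world -> camera coordinates
    in_front = cam[2] > 0                    # ignore points behind the camera
    proj = K @ cam                           # project onto the image plane
    uv = (proj[:2] / proj[2]).T              # (N, 2) pixel coordinates
    d2 = np.sum((uv - np.asarray(click_xy)) ** 2, axis=1)
    d2[~in_front] = np.inf
    return points_3d[int(np.argmin(d2))]     # anchor for the 3D virtual object
```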

Procedure

All the experiments took place indoors in a real training environment of the Dutch police in Leusden, The Netherlands (Fig. 5).

Fig. 5

The training environment of the Dutch police in Leusden, The Netherlands

For each experiment, 2 participants in different physical locations (but in the same house; see Fig. 6) are required:

Fig. 6

Ground floor/physical setup for the experiment sessions on VIP protection scenario, forensic investigation scenario and domestic violence scenario

  1. The local person, who wears the HMD. The HMD is connected to a laptop that the local person carries in a backpack. The laptop is connected via a wired network connection to another laptop hosting the RDSLAM module of DECLARE and the shared memory space.

  2. The remote person, who uses a laptop to run the remote user application of DECLARE. Like the laptop of the local user, this laptop is connected via a wired network connection to the laptop hosting the RDSLAM module of DECLARE and the shared memory space.

At the beginning of each experiment, RDSLAM is calibrated to initialize the tracking of the local user's motion in the 3D coordinate system. The remote person observes, on a laptop, the video captured by the camera of the local user's HMD, plus a menu with buttons shown in the upper-left part of the screen via which virtual 2D and 3D objects can be placed.

Regarding the positioning of the virtual content, there are two types of virtual objects: (1) fixed virtual objects, which stay in one fixed position in the user interface (see Fig. 9), and (2) virtual objects that are linked to the key points provided by the RDSLAM module (the yellow points in Figs. 7 and 8).

Fig. 7

Screenshot of the user interface for the remote investigator during the experiment with the VIP protection scenario

Fig. 8

Screenshot of the user interface for the remote user during the experiment on crime investigation

In all scenarios, objects can be selected and placed by the remote user with a mouse click. A selected object may be resized (by pressing the "up" or "down" arrow keys) or deleted (by pressing the delete key) by the remote person.

The transparent rectangle in the middle of the image (see Figs. 7, 8 and 9) represents the display area of the HMD. When virtual objects are within this area, the remote user knows that these are visible to the local user.
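A sketch of this visibility check is given below, assuming the HMD display area is a centred rectangle of known size within the streamed frame; the actual dimensions used by DECLARE are not reported in the paper.

```python
def visible_to_local_user(obj_xy, frame_size, hmd_size):
    """True if a virtual object at obj_xy (pixels in the remote view) falls
    inside the centred rectangle mirroring the HMD display area.

    frame_size: (width, height) of the streamed video frame
    hmd_size:   (width, height) of the HMD display area within that frame
    """
    fw, fh = frame_size
    hw, hh = hmd_size
    left, top = (fw - hw) / 2, (fh - hh) / 2
    x, y = obj_xy
    return left <= x <= left + hw and top <= y <= top + hh

# Example: an annotation at (900, 400) in a 1280x720 frame with a
# hypothetical 800x450 HMD display area is visible to the local user.
print(visible_to_local_user((900, 400), (1280, 720), (800, 450)))
```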

Fig. 9

Screenshot of the user interface for the remote user during the experiment with the domestic violence scenario

VIP protection

The group of participants in the VIP scenario consisted of 6 people. Of these, 3 participants each played two different roles across two experiment sessions (one played scout and bodyguard; two played scout and team leader). In the VIP scenario, the role of scout was played 3 times, the role of bodyguard 2 times, and the role of team leader 4 times. After each round the participants filled in a questionnaire and were debriefed on their experiences.

An experiment session in the VIP scenario starts with the local officer located in the hallway of the safe house and the remote expert located in the ad hoc expertise centre (see Fig. 6).

After the initial calibration process, the local officer moves to the living room in which a witness needs to be safely accommodated, and checks for suspicious objects and events.

For the VIP scenario (see Fig. 7), the GUI for the remote user includes buttons to create 3D objects like spheres, cubes and arrows, 3D text messages and a time counter that counts down the seconds left until a certain task should be accomplished.

Forensic investigation

There were 2 participants in the forensic investigation scenario. Each played both the role of the remote expert and that of the field CSI investigator, in different experiment sessions. After each round the participants filled in a questionnaire and were debriefed on their experiences.

An experiment session in the forensic investigation scenario starts with the local forensic investigator located in the hallway, performing the initial calibration process, and the remote expert located in the ad hoc expertise centre (Fig. 6). Then, the local investigator moves to the living room, where he checks the room and the mannequin (representing the dead body) for traces. During the experiment session, the local investigator receives instructions and collaborates with the remote expert through the augmented reality interface.

In the forensic investigation scenario (see Fig. 8), the remote user was able to place 3D objects (spheres, cubes, arrows), to write 3D text messages, to place laser stickers to mark areas that should be scanned by the local user, and to take/save/load photos from the local scene.

Domestic violence

There were 3 participants (using augmented reality based technology) in the domestic violence scenario. One of them played both as police officer and as team leader. Of the other 2 participants, one played as police officer and one as team leader. After each round the participants filled in a questionnaire and were debriefed on their experiences.

Next to the participants that used the AR support, there were 2 additional participants, namely the perpetrator and the abused wife. These participants were not questioned with respect to their experience during the experiment.

An experiment session in the domestic violence scenario starts with the aggressive husband in the living room, the victim in the hallway, and the two policemen outside, in front of the door (Fig. 6). One policeman wears the HMD and is connected with the remote expert, located in the ad hoc expertise centre, the whole time. The policemen are received by the wife, who also gives information with regard to the domestic violence incident. After the RDSLAM calibration in the hallway, the policemen go to the living room, check for evidence and eventually arrest the aggressive husband.

In the domestic violence scenario (see Fig. 9), the remote user was able to place 3D objects (spheres, cubes, arrows), to send 3D text messages, to display information about the persons involved in that domestic violence case and to take/save pictures using the camera of the local user's HMD.

Results

This section reports on the results from the questionnaires (see Appendix 1) as well as the debriefings.

Table 2 presents the results from the questionnaire. The rows represent the Likert items (questions 4.1–4.16) and the columns represent the scenarios (VIP, forensic and domestic violence). For each Likert item and scenario, the table displays three statistical indicators, namely the median (Mdn), the 1st quartile (Q1) and the 3rd quartile (Q3). In addition, the table displays the p value for the null hypothesis that the Likert responses in the three scenarios come from the same distribution, using a Kruskal–Wallis test. In Table 2, the median and quartile values of the responses regarding the design and the experience of the game follow an ordinal scale (1 = strongly agree, 2 = agree, 3 = neutral, 4 = disagree, 5 = strongly disagree). Therefore, lower values of these indicators represent more positive feedback of the participants with regard to the corresponding items.

Table 2 Results (medians, the 1st quartile, the 3rd quartile) on Likert items, for each scenario (including p value of the Kruskal–Wallis one-way analysis of variance)

For each item ([4.1]–[4.16] in Table 2), given the returned p value, the Kruskal–Wallis one-way analysis of variance does not reject the null hypothesis that the Likert responses from all three scenarios come from the same distribution at the 1 % significance level (p value ≥ 0.01). In line with these results, it follows that the three scenarios are not perceived differently by the participants in the experiments.
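For illustration, such a test can be reproduced with SciPy. The Likert responses below are hypothetical placeholders (the real per-item data are summarized in Table 2); the sketch only shows the mechanics of the test.

```python
from scipy.stats import kruskal

# Hypothetical Likert responses (1 = strongly agree ... 5 = strongly disagree)
# for one questionnaire item in each of the three scenarios.
vip      = [1, 2, 2, 1, 3, 2]
forensic = [2, 1, 2, 1]
domestic = [2, 3, 1, 2, 2]

stat, p = kruskal(vip, forensic, domestic)
# With p >= 0.01, the null hypothesis that the responses in the three
# scenarios come from the same distribution is not rejected at the 1% level.
print(f"H = {stat:.2f}, p = {p:.3f}, reject at 1%: {p < 0.01}")
```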

The evaluation of the answers to the open questions in the questionnaire as well as the debriefings indicates that the scenarios were clear and attractively built, with clear instructions and explanations given beforehand (supporting aspect 1 in Table 1). The location and the setup, which included weapons, real handcuffs, and visual representations of blood patterns and injuries (on a mannequin in the forensic scenario), contributed to the realism of the scenarios.

Table 3 summarizes the overall findings of the usability study. In the following, we summarize the feedback of the participants per scenario.

Table 3 Overall results

VIP protection

The participants in this scenario indicated that the objective of the scenario was clear (Mdn = 1.00, Q3 = 2.00), easy (Mdn = 2.00, Q1 = 1.00, Q3 = 4.00) and realistic enough in relation to the scenario objective (Mdn = 2.00, Q1 = 1.00, Q3 = 3.50), with clear initial instructions and explanations (Mdn = 1.00, Q3 = 3.50).

They appreciated that the scenario prepares for future assignments (Mdn = 1.50, Q1 = 1.00, Q3 = 2.00).

The AR support left participants enough control to determine the progress of the scenario (Mdn = 2.00, Q1 = 1.00, Q3 = 4.00). Moreover, the participants indicated that the interaction could be enriched with more objects and characters (Mdn = 2.00, Q1 = 1.00, Q3 = 3.00).

In addition, the participants appreciated that the provided AR environment can improve the communication in the team (Mdn = 1.00, Q3 = 2.00), and indicated that they would like to use more AR scenarios for training purposes (Mdn = 1.50, Q1 = 1.00, Q3 = 2.00).

The participants mentioned the location (three participants), the use of AR glasses (one participant), the AR visualization (five participants), the functional user interface, and the capability to make virtual annotations as factors contributing positively to the realism of the scenario.

Moreover, the initial directions given to the team before each experiment session were considered good contributions to the preparation for a real task with regard to the scenario.

The common operational picture, the information delivered, the remote support, the shared display, and the capability of the remote expert to visualize the local physical environment were appreciated as good contributions to collaborating with colleagues.

The control, the common operational picture and the consistent visualization in the shared virtual workspace were seen as good contributions to the training of communication skills.

Three participants judged the scenario as useful for the development of team situational awareness (aspect 4 in Table 1).

The AR equipment in the backpack, the cables, and the rather slow pace of movement (required for the RDSLAM visual tracking) had a negative impact on mobility during the experiment; such an assembly impedes the application of the AR system in both training sessions and real practice (aspect 5a in Table 1).

The occasional information overload and hindrance, and the quality of the AR overlay (too dark), were factors with a slight negative impact during the experiment (aspects 5a and 5d in Table 1).

Forensic investigation

In the case of the forensic investigation, the participants mentioned that the scenario was realistic enough (Mdn = 2.00, Q1 = 1.00, Q3 = 3.00), exciting and attractively built (Mdn = 1.50, Q1 = 1.00, Q3 = 2.00), and had a clear objective (Mdn = 1.50, Q1 = 1.00, Q3 = 2.00), with clear instructions and explanations (Mdn = 1.50, Q1 = 1.00, Q3 = 2.00). Moreover, the participants indicated that the AR support was easy to use during the scenario (Mdn = 1.00), with virtual information being easily recognizable (Mdn = 2.00, Q1 = 1.00, Q3 = 3.00), and that the AR support left enough control to determine the progress of the scenario (Mdn = 2.00, Q1 = 1.00, Q3 = 3.00). The participants pointed out that they would like to use more AR-supported scenarios for training purposes (Mdn = 1.50, Q1 = 1.00, Q3 = 2.00).

The two participants mentioned the location, the situation, the objects, the realistic blood and blood pattern, the augmented reality capabilities, and the communication support as positive factors for the realism of the scenario. The overview and the feedback were considered good contributions to the preparation for a real task with regard to this scenario.

The augmented reality technology helped to increase the opportunities for consultation on the current operational situation, enabled overviews and brought clarity to the situation, and facilitated monitoring and external control; all these factors were considered positive for the collaboration with colleagues.

The participants indicated that the scenario facilitated the exchange of information within the team and that even more objects and scenarios could be investigated by using AR technology (aspects in category 2 in Table 1).

The two participants also mentioned that the scenario helped them to build up a common ground regarding the situation (aspect 1 in Table 1).

They further considered the AR environment as suitable for enabling collaboration among distributed users.

The possibility to generate shared overviews, the distance between the physical locations of the local and the remote users, and the collaboration functions were perceived as good contributions to the training of communication skills. The participants appreciated that the augmented reality technology represents a new approach to communicating at the crime scene. Moreover, the two participants indicated that the system used can be an appropriate tool for communication training.

During the experiment sessions on the forensic investigation scenario, there were a few technical problems with the calibration procedure, which was mentioned (by one participant) as a factor with negative impact on the realism of the scenario.

One problem was caused by the mask worn over the mouth, which led to fogging of the HMD (with negative implications for aspect 5a in Table 1).

Domestic violence

The participants interacting in this scenario indicated that the scenario was completely clear (Mdn = 2.00, Q1 = 1.00, Q3 = 5.00) and realistic enough (Mdn = 2.00, Q3 = 3.00), with clear instructions and explanations (Mdn = 2.00, Q1 = 1.00). The flow of actions and the orders given during the experiment related to important tasks in the participants' daily work (Mdn = 2.00, Q1 = 1.00). Moreover, the participants indicated that the scenario prepares them well for future tasks (Mdn = 2.00, Q3 = 3.00) and that they would like to use more AR-supported scenarios for training purposes (Mdn = 2.00, Q3 = 4.00). The participants also specified that the virtual information was easily recognizable (Mdn = 2.00, Q1 = 1.00) and displayed at the right time (Mdn = 2.00), and that the AR support left enough control to determine the progress of the scenario (Mdn = 2.00, Q3 = 3.00). During the experiment sessions, there were no significant technical errors (Mdn = 1.00), and those which occurred were resolved quickly (Mdn = 1.00).

The contextual information, such as on-the-spot person profiles and information about objects (aspect 4 in Table 1), and the visibility and timing of information in AR (aspect 3 and aspects in category 2 in Table 1) were perceived as very good aspects of the scenario (aspect 1 in Table 1). The information shared with regard to dangerous civilians, the presence of the team leader in charge, the location, and the realistic injuries and weapons were considered good contributions to the realism of the scenario. The use of true information on the persons and objects was indicated as a good contribution to the preparation for a real task related to the scenario. One participant stated that the operation protocol and the team information sharing were positive factors for the collaboration with colleagues. These possibilities have a positive impact on the development of team situational awareness (aspect 4 in Table 1).

The restricted mobility of the local user was considered a critical issue for the applicability of AR in real operations (aspect 5a in Table 1).

Occasionally, the AR content was too prominent and hindered focus on the current activity (aspect 5d in Table 1).

Discussion

The scenarios are well defined, exciting and attractively built, with clear objectives and with clear instructions and explanations given at the beginning. The virtual information was easily recognizable and displayed at the right time. Moreover, the AR support during the scenario was easy to use and left enough control to determine the progress of the scenario.

According to the Kruskal–Wallis one-way analysis of variance, there is no statistically significant evidence of any difference in the participants' perception of the three scenarios.

The participants appreciated the shared visualization (aspect 2a in Table 1), the communication (aspects in category 2 in Table 1), the directions of the remote user (aspect 3 in Table 1), the person profile pictures delivered on the spot, and the information provided in AR to determine essential focus points, increase situational awareness (aspect 4 in Table 1) and improve the common operational picture. In most cases, the virtual information was easily recognizable and displayed at the right time.

In summary, the ability of the AR system to add information to a real situation and to support collaboration among distributed users showed positive effects on communication and team situational awareness.

The experiments also illustrated shortcomings of the current technology (with negative impact on aspects from category 5 in Table 1). Some policemen experienced difficulties due to the temporary loss of visual tracking (aspect 5c in Table 1), which was caused by a very high pace of the tasks and by improper calibration of the marker-less tracking.

As the tracking system used [47] relies on a computer vision-based algorithm, the quality of the calibration and of the online tracking strongly depends on both the richness of visible patterns (for the calibration step) and good illumination conditions in the physical environment. In such conditions, occasional technical issues with interaction in the AR environment were noticed during the experiment. The participants pointed out that some actions were slower than in real operations. They also identified the lower mobility of the local user, due to the wired connection to the backpack, as a factor with negative impact on the realism and execution of the scenarios, with significant implications for adopting such AR technology in real practice (aspect 5b in Table 1).

So far, the study indicates that the use of AR is valuable for information exchange and the development of team situational awareness in operational teams in the security domain. Most problems that occurred during the study are related to the current hardware characteristics of the mobile HMD device (aspect 5a in Table 1). We take the findings of the current usability study into account when planning the roadmap for future work and preparing the next version of the AR environment.

The AR system used for the current research did not have capabilities to enable the local user to interact with the AR content. In line with previous findings [15], it is beneficial for the local user to interact with the AR content, especially by free-hand gestures (using a depth-sensing camera). This functionality will be considered in future work.

The results show that AR technology is beneficial for supporting information exchange in teams from the security domain. Previous research [48] indicates that, next to situational awareness, emotional awareness is also important, especially when team members collaborate remotely. Future work will also explore affective computing technology to further improve the remote collaboration process in the security domain.

As HMD technology is constantly evolving, we expect that AR technology is close to being adopted for real operations.

Conclusions and future work

Operational teams from the security domain commonly exchange information orally. Even though such information exchange is routinely standardized, communication, especially under time pressure, still faces the risk of misunderstanding and misinterpretation by different team members. This risk has direct negative implications for the security of the operational teams as well as for possibly affected civilians.

We explored the usability of AR-based technology to support information exchange in teams, with the goal of diminishing incidental critical human mistakes in security operations.

The usability study showed that the scenarios are well defined. Moreover, in line with the responses to the questionnaire, the Kruskal–Wallis one-way analysis of variance indicates no statistically significant difference in the participants' perception of the three scenarios.

The participants appreciated the shared visualization, the communication, the directions of the remote supervisor and the person profile pictures. In most cases, the virtual information delivered on the spot was easily recognizable and displayed at the right time.

The feedback of the participants indicated that the ability to add information to a real situation and to support collaboration among distributed users had positive effects on communication.

Still, the study showed limitations of the AR technology, mainly because of the immobility of the system and its user.

The AR system used for the current research did not have capabilities for the local user to interact with the AR content. In future work, we will enable interaction with the AR content for the local user via free-hand gestures (using a depth-sensing camera). We will explore the integration of HMD hardware trackers (measuring orientation) to improve the tracking of the local user. Equally important, we will consider integrating truly mobile HMD equipment for the local officers, to address the major technical limitation towards the application of AR technology in real practice and to thereby facilitate information exchange in teams operating in the security domain. We will extend the AR support to several local users to study the effect on team situational awareness and team coordination. To further enhance team situational awareness, we will explore affective computing technology to support emotional awareness, giving the collaborating users feedback on, e.g., the current stress levels of the other users. Finally, we will develop the scenarios further towards mature simulation training games, to also be able to explore the usability of collaborative AR games in training.

References

  1. Flanagin AJ (2002) The impact of contemporary communication and information technologies on police organizations. In: Giles H (ed) Law enforcement, communication, and community. John Benjamins, Amsterdam, pp 85–105

  2. Hawkins D (2013) Law enforcement tech guide for communications interoperability. A guide for interagency communications projects. Revised and updated with contributions by Mallorie F. Teubner and Bonnie B. Maney. ISBN 978-1-935676-55-3

  3. Leonard M, Graham S, Bonacum D (2004) The human factor: the critical importance of effective teamwork and communication in providing safe care. Qual Saf Health Care 13:i85–i90. doi:10.1136/qshc.2004.010033

  4. van Knippenberg D, De Dreu CKW, Homan AC (2004) Work group diversity and group performance: an integrative model and research agenda. J Appl Psychol 89(6):1008–1022

  5. Militello LG, Patterson ES, Bowman L, Wears RL (2007) Information flow during crisis management: challenges to coordination in the emergency operations center. Cogn Technol Work 9(1):25–31

  6. Gergle D, Kraut RE, Fussell SR (2013) Using visual information for grounding and awareness in collaborative tasks. Hum Comput Interact 28(1):1–39

  7. Fussell SR, Kraut RE, Siegel J (2000) Coordination of communication: effects of shared visual context on collaborative work. In: Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work. Philadelphia, Pennsylvania, USA: ACM. http://doi.org/10.1145/358916.358947, pp 21–30

  8. Fussell SR, Setlock LD, Kraut RE (2003) Effects of head-mounted and scene-oriented video systems on remote collaboration on physical tasks. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. Ft. Lauderdale, Florida, USA: ACM. http://doi.org/10.1145/642611.642701, pp 513–520

  9. Kraut RE, Fussell SR, Siegel J (2003) Visual information as a conversational resource in collaborative physical tasks. Hum Comput Interact, vol 18(1). http://doi.org/10.1207/S15327051HCI1812_2, pp 13–49

  10. Endsley MR (1995) Toward a theory of situation awareness in dynamic systems. Hum Fact J Hum Fact Ergon Soc 37:32–64

  11. Endsley MR, Robertson MM (2000) Situation awareness in aircraft maintenance teams. Int J Ind Ergon 26:301–325

  12. Azuma RT (1997) A survey of augmented reality. Presence: Teleoper Virtual Environ 6:355–385

  13. Azuma R, Baillot Y, Behringer R, Feiner S, Julier S, MacIntyre B (2001) Recent advances in augmented reality. Comput Graph Appl IEEE 21(6):34–47

  14. Milgram P, Kishino F (1994) A taxonomy of mixed reality visual displays. IEICE Trans Inform Syst E77-D (12)

  15. Poelman R, Akman O, Lukosch S, Jonker P (2012) As if being there: mediated reality for crime scene investigation. In: CSCW’12: Proceedings of the 2012 ACM conference on Computer Supported Cooperative Work, pp 1267–1276

  16. de S Almeida I, Oikawa MA, Carres JP, Miyazaki J, Kato H, Billinghurst M (2012) AR-based video-mediated communication: a social presence enhancing experience. In: 14th Symposium on Virtual and Augmented Reality, SVR 2012, pp 125–130

  17. Huang W, Alem L, Tecchia F (2013) HandsIn3D: supporting remote guidance with immersive virtual environments. Hum Comput Interact INTERACT 2013 8117:70–77

  18. Datcu D, Cidota M, Lukosch H, Lukosch S (2014) Using augmented reality to support information exchange of teams in the security domain. In: IEEE International Symposium on Mixed and Augmented Reality (ISMAR’14), Munich, Germany

  19. Dourish P, Bellotti V (1992) Awareness and coordination in shared workspaces. In: Proceedings of the ACM 1992 Conference on Computer Supported Cooperative Work. Toronto, Canada, pp 107–114

  20. Artman H (2000) Team situation awareness and information distribution. Ergonomics 43(8):1111–1128

  21. Pascual RG, Mills MC, Blendell C (1999) Supporting distributed and ad-hoc team interaction. In: International Conference on Human Interfaces in Control Rooms, Cockpits and Command Centres, pp 64–71. doi:10.1049/cp:19990164

  22. Stammers R, Hallam J (1985) Task allocation and the balancing of task demands in the multi-man-machine systems: some case studies. Appl Ergon 16:251–257

  23. Shu Y, Furuta K (2002) Team situation awareness as a basis of team intention inference: definition and framework. In: Proceedings of the 41st SICE Annual Conference, vol 3, pp 1641–1646. doi:10.1109/SICE.2002.1196560

  24. Bowers CA, Jentsch F, Salas E, Braun CC (1998) Analyzing communication sequences for team training needs assessment. Hum Factors J Hum Factors Ergon Soc 40(4):672–679. doi:10.1518/001872098779649265

  25. Berlin JM, Carlström ED (2011) Why is collaboration minimised at the accident scene? Disaster Prevent Manag Int J 20(2):159–171. doi:10.1108/09653561111126094

  26. Smith PA, Baber C, Hunter J, Butler M (2008) Measuring team skills in crime scene investigation: exploring ad hoc teams. Ergonomics 51(10):1463–1488. doi:10.1080/00140130802248076

  27. Streefkerk JW, Wiering C, van Esch-Bussemakers M, Neerincx M (2008) Effects of presentation modality on team awareness and choice accuracy in a simulated police team task. Proc Hum Factors Ergon Soc Annu Meet. doi:10.1177/154193120805200438

  28. Streefkerk JW, van Esch-Bussemakers M, Neerincx M (2009) Context-aware team task allocation to support mobile police surveillance. In: Schmorrow D, Estabrooke I, Grootjen M (eds) Foundations of augmented cognition. Neuroergonomics and operational neuroscience, vol 5638. Springer, Berlin Heidelberg, pp 88–97. doi:10.1007/978-3-642-02812-0_11

  29. Bharosa N, Lee J, Janssen M (2010) Challenges and obstacles in sharing and coordinating information during multi-agency disaster response: propositions from field exercises. Inform Syst Front 12(1):49–65. doi:10.1007/s10796-009-9174-z

  30. Drabek TE, McEntire DA (2002) Emergent phenomena and multiorganizational coordination in disasters: lessons from the research literature. Int J Mass Emerg Disasters (IJMED) 20(2):197–224

  31. Alem L, Tecchia F, Huang W (2011) Remote tele-assistance system for maintenance operators in mines

  32. Stansfield S (1996) Applications of virtual reality to nuclear safeguards and non-proliferation

  33. Stansfield S, Shawver D, Miner N, Rogers D (1995) An application of shared virtual reality to situational training. In: Proceedings of the Virtual Reality Annual International Symposium, pp 156–161

  34. Nilsson S, Johansson B, Jönsson A (2009) Using AR to support cross-organisational collaboration in dynamic tasks. In: 8th IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2009), pp 3–12

  35. Gu N, Kim MJ, Maher ML (2011) Technological advancements in synchronous collaboration: the effect of 3D virtual worlds and tangible user interfaces on architectural design. Autom Construct 20(3):270–278

  36. Schnier C, Pitsch K, Dierker A, Hermann T (2011) Collaboration in augmented reality: How to establish coordination and joint attention? In: ECSCW 2011: Proceedings of the 12th European Conference on Computer Supported Cooperative Work, Aarhus, Denmark, pp 405–416

  37. Yabuki N, Furubayashi S, Hamada Y, Fukuda T (2012) Collaborative visualization of environmental simulation result and sensing data using augmented reality. In: Luo Y (ed) Cooperative Design, visualization, and engineering, vol 7467. Springer, Berlin, pp 227–230

  38. Datcu D, Lukosch S, Lukosch H (2013) Comparing presence, workload and situational awareness in a collaborative real world and augmented reality scenario. In: Proceedings of IEEE ISMAR workshop on Collaboration in Merging Realities (CiMeR)

  39. Datcu D, Lukosch H, Lukosch S (2014) A software system for studying the perception of presence in a collaborative real world and augmented reality scenario. In: The 17th ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW 2014), Baltimore, Maryland, USA

  40. Harteveld C (2011) Triadic game design: balancing reality, meaning and play. Springer, Berlin

  41. Lukosch H, van Nuland B, van Ruijven T, van Veen L, Verbraeck A (2014) Building a virtual world for team work improvement. In: Frontiers in gaming simulation. Springer, pp 60–68

  42. Klabbers JHG (2006) The magic circle: principles of gaming and simulation. Sense Publishers, Rotterdam, The Netherlands, pp 141–166

  43. Bekebrede G (2010) Experiencing complexity: a gaming approach for understanding infrastructure systems. NGI Infra PhD thesis series on infrastructures. Delft University of Technology, Delft, The Netherlands

  44. Endsley MR (1988) Design and evaluation for situation awareness enhancement. In: Proc. of the Human Factors Society 32nd Annual Meeting, Santa Monica, CA, USA, pp 97–101

  45. Azuma R, Baillot Y, Behringer R, Feiner S, Julier S, MacIntyre B (2001) Recent advances in augmented reality. Comput Graph Appl IEEE 21(6):34–47

  46. Azuma RT (1997) A survey of augmented reality. Presence Teleoper Virtual Environ 6:355–385

  47. Tan W, Liu H, Dong Z, Zhang G, Bao H (2013) Robust monocular SLAM in dynamic environments. In: 2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2013), pp 209–218

  48. Datcu D (2014) On the enhancement of augmented reality-based tele-collaboration with affective computing technology. In: The Romanian Human-Computer Interaction Conference (RoCHI’14), Constanta, Romania, September 3–4

Acknowledgements

The presented results have been achieved as part of a joint project with the Dutch Police and the Netherlands Forensic Institute (NFI), sponsored by the National Coordinator for Security and Counterterrorism (NCTV) of the Netherlands.

Competing interests

The authors declare that they have no competing interests.

Author information

Corresponding author

Correspondence to Dragos Datcu.

Appendix 1

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Datcu, D., Lukosch, S., Lukosch, H. et al. Using augmented reality for supporting information exchange in teams from the security domain. Secur Inform 4, 10 (2015). https://doi.org/10.1186/s13388-015-0025-9

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/s13388-015-0025-9

Keywords