1. Introduction

1.1 Problem Statement

Emergency responders (firefighters, EMS crews, etc.) arrive on the scene of a motor vehicle accident involving a car that has rolled off an embankment and has passengers trapped inside. The responders know they will need various extrication tools, including the “Jaws of Life,” but before they grab those tools they need access to up-to-date information about the vehicle’s design and construction to find the safest and most efficient way to reach the trapped passengers. This critical information includes, but is not limited to, airbag locations, high-pressure struts, and where to make safe extrication entry cuts. One wrong cut could inadvertently deploy an airbag, causing further injury or even the death of a passenger or an emergency responder. This contest is an opportunity for participants to create an innovative Augmented Reality (AR) mobile application that provides this information in an intuitive, fast, and effective way to enable the extrication process.

There are existing applications and websites that provide emergency responders with diagrams of vehicles; however, these applications have various shortcomings and limitations: locating the correct vehicle make and model can be cumbersome, the orientation of the images does not match the pose of the vehicle in the accident, and the 2D images may be difficult to manipulate and sort through. This contest enables participants to address these limitations by using AR to overlay the critical vehicle information on top of a 3D picture/model and/or a real-time image capture of the vehicle to help the emergency responder make the correct decisions during the extrication process. In addition to using AR, we anticipate participants will leverage external datasets, for example DMV records, to reliably identify the year, make, and model of the vehicle.

1.2 Objectives

The objective is a mobile application that uses the device camera and the vehicle year, make, and model (entered manually, read via a license plate reader and queried from DMV records, or decoded from the VIN) to overlay information about the vehicle, including safe extrication points, on the live image. For the contest, the app should be able to process data from an external database; however, interacting with an actual database or creating a vehicle information database is not required. Participants can anticipate that some sample datasets will be provided to assist them in their efforts.
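
One possible source for the year, make, and model from a VIN is the public NHTSA vPIC decoding service; the Kotlin sketch below shows the general shape of such a lookup for an Android client. The endpoint path and the "ModelYear"/"Make"/"Model" field names reflect the vPIC DecodeVinValues API as publicly documented, but participants should verify them against the current service and are free to use any other data source, including the provided sample datasets.

```kotlin
import java.net.HttpURLConnection
import java.net.URL
import org.json.JSONObject  // bundled with Android; substitute any JSON parser elsewhere

data class VehicleIdentity(val year: String, val make: String, val model: String)

/**
 * Decodes a VIN into year/make/model using the public NHTSA vPIC web service.
 * Illustrative only: field names and endpoint should be verified before use.
 */
fun decodeVin(vin: String): VehicleIdentity? {
    val url = URL("https://vpic.nhtsa.dot.gov/api/vehicles/DecodeVinValues/$vin?format=json")
    val conn = url.openConnection() as HttpURLConnection
    return try {
        conn.connectTimeout = 5_000
        conn.readTimeout = 5_000
        val body = conn.inputStream.bufferedReader().use { it.readText() }
        val result = JSONObject(body).getJSONArray("Results").getJSONObject(0)
        VehicleIdentity(
            year = result.optString("ModelYear"),
            make = result.optString("Make"),
            model = result.optString("Model")
        )
    } catch (e: Exception) {
        null  // network failure or unexpected payload; fall back to manual entry
    } finally {
        conn.disconnect()
    }
}
```

On Android this call would run off the main thread (e.g., in a coroutine), and because the basic requirements call for offline operation, a real submission would cache results or ship a bundled vehicle dataset rather than rely solely on a network lookup.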

The user should have the ability to control the amount and type of information displayed. For example, specific tips about the vehicle entry cut points (if available) or recommended tools to use can be displayed according to user preferences.
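
A minimal sketch of such user-controlled information layers follows; the layer names and data shapes are illustrative assumptions, not part of the contest specification.

```kotlin
// Illustrative information layers; the real set would be driven by the vehicle dataset.
enum class OverlayLayer { CUT_POINTS, AIRBAGS, HIGH_PRESSURE_STRUTS, HIGH_VOLTAGE, TOOL_TIPS }

// Each overlay item carries the layer it belongs to plus whatever the AR view needs to draw it.
data class OverlayItem(val layer: OverlayLayer, val label: String)

class OverlayPreferences(initiallyVisible: Set<OverlayLayer> = OverlayLayer.values().toSet()) {
    private val visible = initiallyVisible.toMutableSet()

    /** Turns a layer on or off in response to a user preference change. */
    fun toggle(layer: OverlayLayer) {
        if (!visible.remove(layer)) visible.add(layer)
    }

    /** Filters the full overlay set down to what the responder has chosen to see. */
    fun filter(items: List<OverlayItem>): List<OverlayItem> = items.filter { it.layer in visible }
}
```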

We envision the emergency responder walking around the vehicle pointing the device camera at it while, in real time, the application delivers the relevant vehicle information overlaid on top of the image. The app should be able to detect the pose of the vehicle, whether it is upside down or on its side. In addition, the app should be able to identify key elements of the vehicle even in the event of massive damage that has caused significant deformation of the vehicle’s shape or appearance. Participants should not expect training data, such as images of wreckages or damaged cars, to be provided to assist them in their efforts.
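
Participants may approach recognition however they like; as one illustration, the Kotlin sketch below runs a hypothetical on-device TensorFlow Lite classifier over a camera frame to estimate the gross pose of the vehicle. The model file, the 224x224 input size, the three pose classes, and the 0.5 confidence cutoff are all assumptions for the sketch, since no training data or model is supplied with the contest.

```kotlin
import android.graphics.Bitmap
import org.tensorflow.lite.Interpreter
import java.io.File
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Hypothetical gross-pose classes; a real model would be trained by the participant.
enum class VehiclePose { UPRIGHT, ON_SIDE, UPSIDE_DOWN, UNKNOWN }

class PoseClassifier(modelFile: File) {
    // Assumes a model with a 224x224 RGB float input and one output score per pose class.
    private val interpreter = Interpreter(modelFile)
    private val inputSize = 224

    fun classify(frame: Bitmap): VehiclePose {
        val scaled = Bitmap.createScaledBitmap(frame, inputSize, inputSize, true)
        val input = ByteBuffer.allocateDirect(4 * inputSize * inputSize * 3)
            .order(ByteOrder.nativeOrder())
        for (y in 0 until inputSize) {
            for (x in 0 until inputSize) {
                val px = scaled.getPixel(x, y)
                input.putFloat(((px shr 16) and 0xFF) / 255f)  // red channel, normalized
                input.putFloat(((px shr 8) and 0xFF) / 255f)   // green channel
                input.putFloat((px and 0xFF) / 255f)           // blue channel
            }
        }
        input.rewind()
        val output = Array(1) { FloatArray(3) }  // scores for UPRIGHT, ON_SIDE, UPSIDE_DOWN
        interpreter.run(input, output)
        val scores = output[0]
        val best = scores.indices.maxByOrNull { scores[it] } ?: return VehiclePose.UNKNOWN
        // Low confidence suggests heavy damage or poor framing; defer to a manual mode.
        return if (scores[best] < 0.5f) VehiclePose.UNKNOWN else VehiclePose.values()[best]
    }
}
```

The low-confidence fallback matters here: when deformation or lighting defeats automatic recognition, the app should hand off to one of the alternative modes described below rather than display a wrong overlay.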

The application should present information in different modes to give the emergency responder options when real-time images are not feasible for displaying the necessary information, e.g., in low-light environments or where damage is too severe to rely on automatic object recognition. In these situations, the app could employ several options: a 3D image of the vehicle could be used as the basis for the information overlays, audio instructions could tell emergency responders where to start the extrication process and what tools to use, or some other creative approach could be taken.
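
A minimal sketch of that mode selection is shown below; the lux and confidence thresholds are placeholder assumptions that a real application would tune with public safety users.

```kotlin
enum class DisplayMode { LIVE_AR_OVERLAY, STATIC_3D_MODEL, AUDIO_GUIDANCE }

/**
 * Picks a presentation mode from current conditions. The 10-lux and 0.6-confidence
 * cutoffs are illustrative values, not contest requirements.
 */
fun selectDisplayMode(
    ambientLux: Float,
    recognitionConfidence: Float,
    userPrefersAudio: Boolean
): DisplayMode = when {
    userPrefersAudio -> DisplayMode.AUDIO_GUIDANCE
    ambientLux < 10f || recognitionConfidence < 0.6f -> DisplayMode.STATIC_3D_MODEL
    else -> DisplayMode.LIVE_AR_OVERLAY
}
```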

1.3 Resources

2. Evaluation criteria

Participants must adhere to the basic application requirements listed below. Failure to do so may result in non-grading of the application.

Criteria #0: Basic Requirements

Rating: Pass/Fail

  • Is the app capable of identifying the vehicle (i.e., object detection, or the ability to identify the form, shape, and outlines of the car captured by the device’s camera)? (Pass/Fail)
  • Is the app capable of accurately identifying the sides of the vehicle (front, back, driver, and passenger sides)? (Pass/Fail)
  • Are the virtual objects (shapes and text) merged correctly with the real world? (Position, scale) (Pass/Fail)
  • Do the functions support the objectives of the vehicle extrication process? (Pass/Fail)
    • The functions in the application must correspond to the vehicle extrication objectives. There should be no unnecessary actions or interactions.
  • Is the information and media content appropriate? (Pass/Fail)
    • The information and media content should appropriately assist the emergency responder in successfully completing the vehicle extrication process. The quality of the information used must be in accordance with public safety communication best practices.
  • Loading time of virtual objects in the scene must be < 10 seconds (Pass/Fail)
  • The application should be able to function offline, without a connection to a data network (Pass/Fail)
  • Does the application meet the fundamental objectives of the vehicle extrication contest? (Pass/Fail)
    • The purpose of the application and its potential to improve the efficiency of the vehicle extrication process must be clearly identifiable to the user.

All functions, assistance, and feedback must be comprehensible and should support the vehicle extrication process.

Criteria #1: Accuracy

Rating: 20/100

  • Augmented Reality Information Overlay for the user to visualize information:
    • How accurately does the app display the extrication information during use, such as the exact location of the entry cut points?
    • Are important internal components/structures of the vehicle around the entry points marked clearly (for example, using a color-coding scheme)? This includes the location of airbags, high-pressure components, high-voltage parts in electric vehicles, and other hazards.

Criteria #2: Efficiency

Rating: 20/100

  • How efficient is the application in assisting emergency responders during the extrication process?
  • An embedded frame rate counter should continuously display the frame rate; sustaining 30-60 frames per second enables ease of use for emergency responders (a minimal counter sketch follows this list)
  • Loading time of virtual objects (measured in seconds)
    • Minimum: 5 seconds <= t < 10 seconds
    • Medium: 2 seconds <= t < 5 seconds
    • Maximum: t < 2 seconds
  • How well can the user use the application freely while walking around the vehicle?
    • The application will be in use during an operational event and will be handled directly by an emergency responder.
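
One straightforward way to implement the embedded frame rate counter on Android is the Choreographer vsync callback; the sketch below is a minimal example, assuming an onFpsUpdate callback that feeds a small counter view in the AR screen.

```kotlin
import android.view.Choreographer

/** Reports an approximate frames-per-second figure about once per second via vsync callbacks. */
class FpsMeter(private val onFpsUpdate: (Double) -> Unit) : Choreographer.FrameCallback {
    private var frameCount = 0
    private var windowStartNanos = 0L

    fun start() {
        frameCount = 0
        windowStartNanos = 0L
        Choreographer.getInstance().postFrameCallback(this)
    }

    fun stop() = Choreographer.getInstance().removeFrameCallback(this)

    override fun doFrame(frameTimeNanos: Long) {
        if (windowStartNanos == 0L) windowStartNanos = frameTimeNanos
        frameCount++
        val elapsed = frameTimeNanos - windowStartNanos
        if (elapsed >= 1_000_000_000L) {             // one-second measurement window
            onFpsUpdate(frameCount * 1e9 / elapsed)  // frames per second over the window
            frameCount = 0
            windowStartNanos = frameTimeNanos
        }
        Choreographer.getInstance().postFrameCallback(this)  // keep measuring the next frame
    }
}
```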

Criteria #3: Reliability

Rating: 20/100

  • Does the app provide an alternative method for displaying vehicle extrication info if the live image is not suitable for use (e.g., due to extensive damage to the car, or environmental factors such as lighting or weather conditions)?
    • 2D diagram of the vehicle with textual instructions — Minimum
    • 3D/360 interactive diagram of the vehicle — Maximum
  • Advanced features:
    • Animated instructions displaying helpful info such as suggested tools to use, angle of the entry point cuts, etc.
    • Voice-over instructions

Criteria #4: Overall User Experience

Rating: 40/100

Likert-scale-style assessment of the following items.

  • How well is the application designed?
    • The visual aspects of the application must be pleasing to the user and effective in communicating information.
  • How evident is it that the developer leveraged public safety SMEs during the development process?
    • The application must be based on a user-centered process; the quality of the research methods and their subsequent integration are decisive for the success of the product.
  • Is the use sequence in the application intuitive to the user?
    • The goal is to minimize the learning time of the user to effectively use the application. The information presentation, symbols and the use of color must be self-explanatory and contribute to intuitive user guidance that is consistent with user expectations.
  • Is the solution universally intuitive and comprehensible?
    • A wide range of users from varied public safety and training disciplines must be able to securely operate the application in various operational scenarios; errors in use must be minimized, with the goal of eliminating them completely.
  • What are the users’ overall sentiments about the flow of the application?

The sum of expectations, behaviors and reactions before, during and after use of the application must be predominantly positive to earn the trust of emergency responders and to encourage adoption.

3. EXPECTED DELIVERABLES FROM PARTICIPANTS

Review the How to Participate instructions in section 3 of this document to ensure that you are prepared for the in-person Regional Codeathon or Online Contest. The following deliverables will need to be included with the submission:

  • A completed submission form through techtoprotectchallenge.org
  • A 3-minute narrated PowerPoint file or 3-minute narrated video with the following sections:
    • A summary of the submission.
    • Basic information about the participant or participant’s team.
    • Specific requirements of the contest that are being addressed by the participants.
    • Two to four screenshots of the solution or prototype.
    • Overview of key functions included in the submission.
  • Any available files or links to the developed solution or prototype.
  • Any additional files based on the contest description.