EchoWear

Course Instructor

Pramod Gupta

Lead Team Member Affiliation

Computer Science

Second Team Member Affiliation

Computer Science

Third Team Member Affiliation

Computer Science

Abstract

EchoWear is a cross-platform mobile application that lets users control their devices through voice commands, wake-word recognition, and personalized keywords, enabling fast, hands-free operation. The system is not tied to a single ecosystem: it is designed to run on both smartwatches and smartphones, so the same interaction model carries across devices. EchoWear emphasizes fast activation, clear feedback, and simple configuration, and the project explores how these characteristics can serve people with disabilities, including deaf and hard-of-hearing users and people with limited fine motor control.

The application includes a dedicated keyword screen where users can view, search, and edit their own list of keywords. New keywords are created through an Add New Keyword flow, in which the user chooses the phrases the system will respond to. A separate profile section groups settings for connected devices, security and privacy, emergency contacts, notifications, and general information about the app. The interface uses a simple list-based design with clear text, consistent icons, and generous spacing that works on both a phone and a watch screen. These design decisions make interaction predictable and easy to learn, so users can quickly see how the system is configured and which keywords are active. A rough sketch of the keyword list's underlying behavior follows.
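
As an illustration of what the keyword screen might manage behind the scenes, the following Kotlin sketch models a searchable, editable keyword list. The Keyword and KeywordStore names and the action strings are assumptions made for this example, not identifiers taken from the project.

// Minimal sketch of keyword management; all names are illustrative.
data class Keyword(val phrase: String, val action: String)

class KeywordStore {
    private val keywords = mutableListOf<Keyword>()

    // Mirrors the Add New Keyword flow: the user picks the phrase to respond to.
    fun add(phrase: String, action: String) {
        keywords.add(Keyword(phrase.trim().lowercase(), action))
    }

    // Backs the search box on the keyword screen.
    fun search(query: String): List<Keyword> =
        keywords.filter { it.phrase.contains(query.trim().lowercase()) }

    // Supports editing by removal; a real app would also persist changes.
    fun remove(phrase: String) {
        keywords.removeAll { it.phrase == phrase.trim().lowercase() }
    }

    fun all(): List<Keyword> = keywords.toList()
}

fun main() {
    val store = KeywordStore()
    store.add("call mom", "notify_emergency_contact") // hypothetical action label
    store.add("lights on", "toggle_lights")
    println(store.search("call")) // prints the "call mom" entry
}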

Technically, EchoWear listens for specific wake-words, including "Hey," "Hello," and "Excuse me." When a wake-word is detected, the device gives a haptic response and logs the event, so the user knows the system is listening and ready for a follow-up command. The speech recognition components are organized into a pipeline covering audio capture, wake-word detection, keyword recognition, and real-time user interface updates. This core logic is written so that it can be ported to common smartwatch platforms and adapted to other operating systems, allowing the assistant to follow the user across devices while behaving consistently. A sketch of that pipeline appears below.
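
The following Kotlin sketch outlines the described pipeline under stated assumptions: Transcriber and Haptics are hypothetical stand-ins for a speech-to-text engine and the device vibration API, and the two-state detector is one plausible way to realize the behavior, not the project's actual implementation.

// Sketch of: audio frame -> wake-word check -> haptic pulse + event log -> keyword hand-off.
// All names here are illustrative assumptions, not the project's API.
interface Transcriber { fun transcribe(frame: ByteArray): String } // stand-in speech-to-text hook
interface Haptics { fun pulse() }                                  // stand-in vibration hook

class WakeWordDetector(
    private val transcriber: Transcriber,
    private val haptics: Haptics,
    private val wakeWords: Set<String> = setOf("hey", "hello", "excuse me"),
) {
    private var awaitingCommand = false

    // Called once per captured audio frame.
    fun onAudioFrame(frame: ByteArray, onKeyword: (String) -> Unit) {
        val text = transcriber.transcribe(frame).trim().lowercase()
        if (!awaitingCommand) {
            if (wakeWords.any { text.contains(it) }) {
                haptics.pulse()                      // confirmation felt without sound
                println("wake-word detected: $text") // event log stand-in
                awaitingCommand = true               // next phrase is treated as a command
            }
        } else {
            awaitingCommand = false
            onKeyword(text)                          // hand off to keyword matching and UI update
        }
    }
}

Keeping the detector behind these two small interfaces, free of platform calls, is one way the core logic could stay portable across watch and phone operating systems, as the abstract describes.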

The main driving force behind EchoWear is accessibility. Most assistants rely primarily on audio feedback and dense visual interfaces; EchoWear instead favors strong, quick vibration feedback and plain, simple screens over elaborate menus. Vibrations and recognizable on-screen indicators can replace audio alerts, making wake-word detection perceptible to deaf or hard-of-hearing users. Voice-first interaction reduces the need for precise tapping and swiping for users with limited fine motor control. In urgent situations, both phones and smartwatches give users and caregivers quick access to emergency contacts through the configured keywords.

This project investigates practical challenges common to voice-driven, multi-device applications, including recognition accuracy, latency, battery impact, microphone permissions, background noise, and false activations. EchoWear integrates speech recognition, haptic feedback, keyword search, and a compact settings interface into a single working system on both phones and watches. It contributes to mobile and wearable computing by presenting a practical design for a voice-first, accessibility-conscious assistant that is useful both in everyday life and in disability-specific situations, yet simple enough for non-technical users to learn and configure.
