Verwendung von Hover Detection zur Verbesserung der Texteingabe auf Smartphones
Other Titles: Using Hover Detection To Improve Text Entry On Smartphones
Authors: Pollmann, Frederic
Supervisor: Malaka, Rainer
1. Expert: Malaka, Rainer
2. Expert: Frese, Udo

Abstract:
Interaction with smartphones can be challenging for some users, especially with regard to text entry. On these handheld devices, the available screen space limits the size of user interface elements. This problem is exacerbated when many UI elements have to be displayed simultaneously, for example on an on-screen keyboard. Users with limitations such as decreased vision or reduced motor control in particular can have a hard time using these devices, effectively excluding them from a part of modern social life.

In this work we evaluated whether hover detection can be used to improve the usability of text entry on a smartphone. In several experiments the position of the hovering finger was used to selectively enlarge the UI, to provide visual location feedback on the keyboard, or to offer audio assistance. In tests with elderly users, the visual feedback was positively received. Unfortunately, the comparatively high latency of the hover detection (about 250 ms) negated any gains in usability. This result was confirmed in tests with young users, who also did not benefit from the hover detection. Most usability gains for elderly users came from introducing a keyboard layout with larger keys that stayed at that size regardless of hover position.

Visually impaired users also liked the idea of context-sensitive magnification, but hover detection was not really usable due to its inherent lack of haptic feedback. Acoustic feedback did not produce a better user experience for the same reason: reliable use of hover detection was simply not possible without adequate vision.

This research showed that assistive technologies on smartphones, such as selective magnification of the user interface, can help users, but only when the technical parameters are sufficient for the input process. In this case, hover detection allowed us to implement visual, haptic, and audio feedback based on the hover position of the finger as a proof of concept. Unfortunately, the high latency only allowed us to show qualitative improvements, not quantitative ones. Further improvements in hover detection hardware may make this research relevant again, though.
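The selective magnification described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the dissertation's implementation: the key geometry, the nearest-key selection rule, and the 1.5× magnification factor are all assumptions for the sake of the example.

```python
from dataclasses import dataclass

@dataclass
class Key:
    """One on-screen keyboard key with its current display scale."""
    label: str
    x: float   # top-left corner
    y: float
    w: float
    h: float
    scale: float = 1.0

    def center(self):
        return (self.x + self.w / 2, self.y + self.h / 2)

def on_hover(keys, hx, hy, magnification=1.5):
    """Enlarge the key nearest to the hovering finger; reset all others.

    In a real app the (hx, hy) coordinates would arrive via the platform's
    hover events (e.g. Android's ACTION_HOVER_MOVE); on the hardware used
    in the dissertation these events carried roughly 250 ms of latency.
    """
    nearest = min(keys, key=lambda k: (k.center()[0] - hx) ** 2 +
                                      (k.center()[1] - hy) ** 2)
    for k in keys:
        k.scale = magnification if k is nearest else 1.0
    return nearest

# Example: a one-row mini keyboard; the finger hovers near the "s" key.
row = [Key(c, x=i * 40, y=0, w=40, h=60) for i, c in enumerate("asdf")]
hovered = on_hover(row, hx=65, hy=30)  # "s" is enlarged, the rest stay at 1.0
```

The abstract's finding that a statically enlarged layout outperformed this dynamic scheme suggests the bottleneck was the event latency, not the magnification logic itself.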
Keywords: Human-computer interaction; mobile interaction; text entry; hover detection; airview
Issue Date: 16-Oct-2017
Type: Dissertation
URN: urn:nbn:de:gbv:46-00106137-15
Institution: Universität Bremen
Faculty: FB3 Mathematik/Informatik
Appears in Collections: Dissertationen
Items in Media are protected by copyright, with all rights reserved, unless otherwise indicated.