Reactive user interface

A human-to-computer user interface is said to be "reactive" if it has the following characteristics:

  1. The user is immediately aware of the effect of each "gesture". Gestures can be keystrokes, mouse clicks, menu selections, or more esoteric inputs.
  2. The user is always aware of the state of their data.[1] Did I just save those changes? Did I just overwrite my backup by mistake? No data is hidden: in a figure-drawing program, for example, the user can tell whether a line segment is composed of smaller segments. (The first two characteristics are illustrated by the sketch after this list.)
  3. The user always knows how to get help. Help may be context-sensitive or modal, but it is substantial. A program with a built-in help browser is not reactive if its content is just a collection of screen shots or menu item labels with no real explanation of what they do.
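
These characteristics can be made concrete with a minimal, hypothetical sketch. The following TypeScript fragment assumes a web page containing a text area with id "editor", a preview pane "preview", a status line "status-line", and a save button "save-button"; the element names are illustrative only and do not come from the article. Every keystroke is reflected immediately in the visible output (characteristic 1), and whether the text has unsaved changes is always displayed (characteristic 2).

    // Hypothetical page elements; the IDs are illustrative only.
    const editor = document.getElementById("editor") as HTMLTextAreaElement;
    const preview = document.getElementById("preview")!;
    const statusLine = document.getElementById("status-line")!;

    editor.addEventListener("input", () => {
      // Characteristic 1: the effect of each keystroke is shown at once.
      preview.textContent = editor.value;
      // Characteristic 2: the state of the data is never hidden.
      statusLine.textContent = "Unsaved changes";
    });

    document.getElementById("save-button")!.addEventListener("click", () => {
      // How the text is actually persisted is out of scope here; the point is
      // that the user sees the saved state the moment it changes.
      statusLine.textContent = "All changes saved";
    });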

Reactivity was a major goal of early user interface research at MIT and Xerox PARC. A computer program that was not reactive would not be considered user-friendly, no matter how elaborate its presentation.[citation needed]

Early word-processing programs whose on-screen representation looked nothing like their printed output could still be reactive. A common example was WordStar on CP/M: its display resembled a markup language on a character-cell screen, but it had deep built-in help that was always available from an on-screen menu bar, and the effect of each keystroke was obvious.

References

  1. ^ "Reactive UI with Dart and Flutter: Building Dynamic User Interfaces". Cloud Devs. 2023. Retrieved February 15, 2024.