Interaction models in a multi-system, mobile computing environment

30 March, 2015 - 11:00

When the inventors of the Xerox Alto adopted the GUI/WIMP interaction model, they revolutionized computing. The model was a perfect fit for the type of computing system they envisaged and for the first generation of personal computing to come. It is, however, unlikely that they could have imagined the breadth of systems that exist now and are anticipated in the future. New interaction models, as revolutionary as GUI/WIMP was in its time, need to be identified.

User interfaces of the future are being driven by a number of key advancements in computing technology.

Key advances in computing driving user interface design

The following factors are changing the way user interfaces are designed:

  1. Computing power  - Vast amounts of computing resources are now available to users either on readily available devices (e.g. smart telephones) or in the cloud (Unit 2). Processing demands that used to require execution on the server-side can now be done on the client-side. Innovative UIs must support adaptable access to these resources.
  2. Connectivity  - Access to the Internet is now readily available at low cost. Wireless and high-speed broadband connections are the norm rather than the exception. UIs must provide efficient access to resources in increasingly diverse environments and situations.
  3. Device proliferation  - More and more varieties of devices provide access to the Internet and computing resources – mobile telephones, game systems, music players, televisions, GPS devices, etc. Each of these devices offers unique interface elements that UIs should support.
  4. Internet standards  - The adoption of standards amongst service providers and device manufacturers should allow the development of interoperable and compatible UIs. XML-based integration using such standards as XSL and XSLT supports the easy re-purposing of general data/resources into device-dependent formats.
  5. User interface research  - The multidisciplinary nature of HCI has led to the development of new interaction models and techniques that are adaptable to the functionalities of new technologies. For example, gestural interfaces will be discussed later in this unit.
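The re-purposing idea in point 4 is normally done with an XSLT stylesheet, but the same transformation can be sketched in a few lines of Python. The feed below and the element names in it are hypothetical; the point is simply that one general XML resource can be reduced to a compact, device-oriented format:

```python
import xml.etree.ElementTree as ET

# A hypothetical general-purpose product feed (assumed data for illustration).
SOURCE = """
<catalog>
  <item><name>Widget</name><price>9.99</price><blurb>Long marketing text...</blurb></item>
  <item><name>Gadget</name><price>19.99</price><blurb>More long text...</blurb></item>
</catalog>
"""

def repurpose_for_mobile(xml_text: str) -> str:
    """Re-purpose a general XML feed into a compact, device-dependent format,
    keeping only the fields a small screen needs (the role XSLT plays)."""
    root = ET.fromstring(xml_text)
    mobile = ET.Element("mobile-catalog")
    for item in root.findall("item"):
        entry = ET.SubElement(mobile, "entry")
        entry.set("name", item.findtext("name", ""))
        entry.set("price", item.findtext("price", ""))
        # The verbose <blurb> is deliberately dropped for the mobile format.
    return ET.tostring(mobile, encoding="unicode")

print(repurpose_for_mobile(SOURCE))
```

In a production pipeline the transformation rules would live in a stylesheet rather than in code, so the same source data can be re-targeted to new devices without reprogramming.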

The simplest of these new UI styles can be categorized as post-WIMP interactions.

New UI styles

New UI styles, including the following, are currently being adopted in some applications:

  • Marking menus  - Circular menus that appear directly under the cursor; selections are made with short directional strokes, so each command is the same small movement away.
  • Droppable tools  - User tools that can be 'dropped' anywhere in the user workspace and 'grabbed'/retrieved later.
  • Graspable interfaces  - Using physical objects as input to manipulate virtual objects. Providing graspable objects has become a popular feature on e-commerce websites; such interfaces can be seen as an attempt to compensate for the fact that online customers do not have physical access to products.
  • Dynamic queries  - Updating the display continuously as a means of filtering data in and out of the user's view; introduced in 2010, Google Instant attempts to bring Google users this level of functionality. Many Google searchers appeared to accept this new feature readily; others found it intrusive. Pivot is another application that can filter and present data in new and potentially powerful ways.
  • Zoomable user interfaces  - Let users navigate through the interface quickly and intuitively. Websites incorporating data from Google Maps often do so to give their visitors the ability to zoom and pan through the site interface. For example, visitors to the OUHK website can quickly and easily explore the area surrounding the university.
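The geometry behind a marking menu is simple enough to sketch: the stroke's direction from the menu centre picks one of the items arranged in a circle. The function below is a minimal, hypothetical illustration (the item names and dead-zone radius are assumptions, not part of any real toolkit):

```python
import math

def marking_menu_pick(items, dx, dy, dead_zone=10.0):
    """Map a stroke (dx, dy) from the menu centre to the item lying in that
    direction. Items are laid out clockwise starting at 12 o'clock; a stroke
    shorter than the dead zone selects nothing (menu stays open)."""
    if math.hypot(dx, dy) < dead_zone:
        return None
    # Angle measured clockwise from straight up (screen y grows downward).
    angle = math.degrees(math.atan2(dx, -dy)) % 360.0
    sector = 360.0 / len(items)
    # Shift by half a sector so item 0 is centred on 12 o'clock.
    return items[int(((angle + sector / 2) % 360.0) // sector)]

commands = ["open", "copy", "paste", "delete"]  # hypothetical commands
print(marking_menu_pick(commands, 0, -30))      # stroke straight up
```

Because selection depends only on direction, practised users can flick the stroke without waiting for the menu to draw, which is what makes marking menus fast.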

The following video analyses advances in UIs, and begins looking beyond current UI features and metaphors.


If these current styles and examples are any indication, future interfaces will:

  • emphasize direct manipulation;
  • move away from the use of metaphors (e.g. icons, desktops, etc.); and
  • build upon human and social interaction.

Direct manipulation

Direct manipulation is a powerful concept in the design of UIs. Its goal is to make operations more natural and thereby increase the transparency of the interface. This is accomplished by providing the user with a continuous representation of an object of interest. The user can then manipulate this object through physical actions or labeled button presses rather than through commands with complex and ambiguous syntax. Actions on an object are rapid, incremental, reversible operations whose impact on the object of interest is immediately visible. The typical GUI/WIMP operation of drag-and-drop is an early implementation of the direct manipulation concept.
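The "rapid, incremental, reversible" property above is usually implemented by recording each manipulation as a small operation with an inverse. The sketch below is one common way to do this (a command-pattern history); the object and operation names are hypothetical:

```python
class MoveOp:
    """One incremental, reversible direct-manipulation action: dragging an
    on-screen object by (dx, dy). Undo applies the inverse displacement."""
    def __init__(self, obj, dx, dy):
        self.obj, self.dx, self.dy = obj, dx, dy
    def do(self):
        self.obj["x"] += self.dx
        self.obj["y"] += self.dy
    def undo(self):
        self.obj["x"] -= self.dx
        self.obj["y"] -= self.dy

class History:
    """Records applied operations so every manipulation stays reversible."""
    def __init__(self):
        self.ops = []
    def apply(self, op):
        op.do()           # effect is immediately visible
        self.ops.append(op)
    def undo_last(self):
        if self.ops:
            self.ops.pop().undo()

icon = {"x": 0, "y": 0}       # hypothetical on-screen object
history = History()
history.apply(MoveOp(icon, 5, 3))   # a drag gesture
history.undo_last()                 # reversible: back where it started
```

Keeping every action in this do/undo form is what lets a direct-manipulation interface guarantee that no gesture is ever irrecoverable.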

As mentioned earlier, numerous UIs currently take advantage of the power of zoomable interfaces. These interfaces typically present the user with an infinite flat surface that can be viewed at any resolution. Pan and zoom interactions allow the user to determine dynamically which objects in the interface are visible and at what level of detail/scale. This interaction model can readily be adapted to give the user a sense of immersion within the interface.
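The core of such a pan-and-zoom view is a single coordinate transform: zooming must keep the point under the cursor fixed on screen, or the view appears to slide away from the user. A minimal sketch, assuming the convention screen = (world − origin) × scale (the function and parameter names are illustrative, not from any particular toolkit):

```python
def zoom_about(view_scale, view_origin, cursor, factor):
    """Zoom a pan-and-zoom view by `factor`, keeping the world point under
    the cursor fixed on screen. `view_origin` is the world coordinate that
    maps to screen (0, 0); screen = (world - origin) * scale."""
    ox, oy = view_origin
    cx, cy = cursor
    # World point currently under the cursor.
    wx = ox + cx / view_scale
    wy = oy + cy / view_scale
    new_scale = view_scale * factor
    # Re-anchor the origin so (wx, wy) maps back to the cursor position.
    new_origin = (wx - cx / new_scale, wy - cy / new_scale)
    return new_scale, new_origin

# Doubling the zoom with the cursor at screen (100, 50):
scale, origin = zoom_about(1.0, (0.0, 0.0), (100.0, 50.0), 2.0)
```

Panning is then just a shift of `view_origin`; together the two operations give the "infinite surface at any resolution" behaviour described above.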

The following video discusses this move toward direct manipulation and toward immersion.


Second Life (SL) is a virtual world (computer-based simulated environment) that is accessible on the Internet. A free client program called the Viewer enables its users, called residents, to interact with each other through avatars. Residents can explore, meet other residents, socialize, participate in individual and group activities, and create and trade virtual property and services with one another, or travel throughout the world. You can see a scene from Second Life here.

Interaction in Second Life

As you can see in the screen capture above, Second Life provides its users with a number of post-WIMP interfaces. The avatar can communicate with the environment via gestures (controlled by key presses, joystick, mouse, etc.) to perform such operations as opening doors and sitting down. Objects within the environment (e.g. clothing) can be directly manipulated. Spatial orientation within the environment can be controlled via a customized menu. Data within the environment (e.g. avatar name, 'seat', etc.) are dynamically created and displayed when relevant. The image above shows an example of a marking menu, one of the post-WIMP interaction elements described earlier. The Second Life Viewer client is specially designed to support these interaction capabilities, since they cannot be provided in a Web environment with standard HTML elements. Second Life does plan to support Web browser functionality in the near future.