Recently there has been an explosive expansion in mobile computing and an increasing availability of cheap sensors to detect elements of the user's current context, e.g. their location and the air temperature. As a result there has been ever-increasing interest in context-aware applications: applications whose behavior is, to some degree, governed by the user's current context. Like many emerging technologies, context-awareness has attracted a wide spectrum of claims as to its likely future impact. As an example on the positive side, the annual report of Hewlett-Packard to shareholders holds context-awareness to be a key future technology for the company, thus showing that business executives as well as researchers are now taking notice. As a further example, some staff at Reuters see the use of context as the most promising way of tackling the increasing problems of information overload. As additional positive evidence, there are plenty of successful applications just over the border of what are considered to be true context-aware applications: these include GPS applications in farming, in construction, in the military, and in `p-commerce' (commerce based on the user's location) in general. On the negative side, pessimists say that, to bridge the gap between laboratory and marketplace, context-awareness needs a killer application, and they see no sign of one.

The purpose of this paper is to look at potential context-aware applications. The applications may not be universally accepted as killer ones, but we hope they at least offer some credibility. We try to describe applications in generic terms, though we give specific examples for the purpose of explanation. We also try to take a catholic view of the purpose of an application: it may range from an economic one of improving communications in an office to a personal one of enriching family life. The paper presents six types of context-aware application. We emphasize that it is not our aim to produce a taxonomy: there are plenty of applications that do not fit any of the generic applications we discuss.

There will be no pure context-aware applications, since context-awareness on its own is not something a user needs. Instead many see context-awareness as an enabling technology that helps other applications perform better. Thus in the applications that we describe, context-awareness may be a relatively small part of the whole application, though it will be an essential part.

Past work
Numerous context-aware projects have been undertaken, albeit mostly with the label `prototype' or `pilot'. We would like to highlight one of these, because it has a generic quality. This is the work at Xerox PARC in the early nineties, described in a survey article by Schilit et al [1]. The article has since become a seminal paper for researchers in the field. It presented the earliest taxonomy of context-aware applications, and the Xerox researchers produced working prototypes of applications in each class of their taxonomy; these prototypes used the PARCTab, a device that combines the properties of an active badge and a PDA. The taxonomy consists of four classes. The first is proximate selection, which is concerned with automatically changing interfaces so that the natural defaults reflect the user's current context. The second is automatic reconfiguration according to context. The third and fourth are concerned with context information/commands and with context-triggered actions, respectively; a majority of present-day applications come within these two classes. The four classes are centred on the form of the user interface of the application, rather than on the way context is used.

Arguably this work was ahead of its time, especially as the hardware then available was bulky, expensive and far from ubiquitous.

Capturing and maintaining contexts
Our six types of application, like others, require a basic infrastructure for maintaining and manipulating contexts. There are two elements to this: (a) the capture of the current context and (b) context memory.

The current context is typically captured partly from sensors, partly from existing information (diaries, todo lists, weather forecasts, share price feeds), partly perhaps from user models and task models, partly from the state of the user's computing equipment and the user's interaction with that equipment, and partly from explicit settings by the user. As an example of the last of these, the user may set their current interests/needs/state (`I want a restaurant', `I am on holiday', `I want to know about historical buildings'). The current context may combine the physical and the virtual, and may switch contextual fields between the two: for example, the user might want to pretend sometimes that they are at a different time and place from their physical one.
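
To make this concrete, the following sketch (in Python, with entirely hypothetical field names) merges readings from sensors, a diary and explicit user settings into a single context record; because later sources take precedence, a user's explicit `pretend' location simply overrides the physical one. This is an illustration of the idea rather than a prescription for how such merging should be done.

    # Illustrative sketch only: merging context from several sources into one record.
    # Field names and the precedence ordering are assumptions made for this example.

    def merge_context(sensor_readings, diary, user_settings):
        """Build the current context; later sources override earlier ones."""
        context = {}
        context.update(sensor_readings)   # physical context, e.g. location, temperature
        context.update(diary)             # existing information, e.g. today's appointments
        context.update(user_settings)     # explicit settings, e.g. `I want a restaurant'
        return context

    current = merge_context(
        {"location": "51.38N, 2.36W", "temperature_c": 21},
        {"activity": "project meeting"},
        {"interest": "historical buildings", "location": "Rome, 1750"},  # a `virtual' override
    )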

Sensors tend to be at a low level; for example sensors in an office may detect whether doors are open or chairs are in use. Humans are interested in higher-level contextual elements, such as whether people are busy or whether a meeting is taking place. Past work on synthesizing high-level events from low-level ones, such as Pepys [2], may well find increasing application in the future. As this synthesis becomes cleverer at detecting the user's context, the amount the user needs to tell the system will decrease. For example, the synthesis might be able to deduce from a number of low-level details that the user is on holiday. Past experience, however, shows that such deductions can often be too clever by half, and we will need to find out where the practical boundaries lie.
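
As a concrete illustration of this kind of synthesis, here is a minimal rule-based sketch; the sensor names and thresholds are invented for the example, and a deployed system would need something far more robust (and, as the next paragraph argues, probabilistic).

    # Illustrative sketch: synthesizing a high-level event from low-level office sensors.
    # Sensor names and thresholds are invented for this example.

    def synthesize_office_context(door_open, chairs_in_use, speech_detected):
        """Derive a human-level description from low-level readings."""
        if chairs_in_use >= 3 and speech_detected:
            return "meeting in progress"
        if chairs_in_use >= 1 and not door_open:
            return "occupant busy"
        if chairs_in_use >= 1:
            return "occupant present"
        return "room empty"

    print(synthesize_office_context(door_open=False, chairs_in_use=4, speech_detected=True))
    # prints: meeting in progress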

Synthesis is also important in making probabilistic judgements based on sensor readings. This could apply when several alternative sensors relate to the same physical quantity: location may potentially come from GPS, from the user's cellphone and from image recognition of camera pictures. Any of these can give unreliable readings and any can be non-operational, but the application needs to be provided with the most probable location based on the data derived from the three types of sensor.
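
One simple way of making such a judgement, sketched below with invented confidence weights, is a confidence-weighted vote among whatever sensors are currently operational; a real system would use a proper probabilistic model, but the overall shape is similar.

    # Illustrative sketch: choosing the most probable location from several
    # unreliable sensors.  The confidence weights are invented for this example.
    from collections import defaultdict

    def most_probable_location(estimates):
        """estimates is a list of (location, confidence) pairs; a sensor that is
        non-operational simply contributes no pair.  Returns the location with
        the greatest total confidence, or None if no sensor is working."""
        scores = defaultdict(float)
        for location, confidence in estimates:
            scores[location] += confidence
        return max(scores, key=scores.get) if scores else None

    estimates = [
        ("High Street", 0.6),    # GPS fix
        ("High Street", 0.3),    # coarse cell-tower estimate
        ("Market Square", 0.5),  # image recognition from a camera
    ]
    print(most_probable_location(estimates))   # prints: High Street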

As a final comment on synthesis, the synthesized context may itself be valuable information that can be presented to the user: this can apply especially when context is rapidly changing as in emergencies, battlefield situations and financial upheavals.

The second fundamental requirement, in addition to capturing and representing the current context, is what we call context memory. (This may also be called `history' or `logging', but we prefer the term `memory' as it implies a degree of organization and intelligent recall.) At the lowest level context memory is important in applications where the context values are not per se important, but change is important, e.g. someone moving. To detect change, the application obviously has to memorize past values. At a higher level, context memory goes hand in hand with synthesis: analyzing past contexts, and in particular what is changing and what is not, can lead to intuitions (`The user has been viewing web pages about old buildings for the past ten minutes, therefore ... ', `The user has been composing messages to work colleagues, therefore ... ').
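
In its simplest form a context memory is just a timestamped log with operations for detecting change and for examining recent history; the sketch below (field names are assumptions, not a proposed interface) illustrates both uses.

    # Illustrative sketch of a context memory: a timestamped log that supports
    # change detection and simple queries over recent history.
    import time

    class ContextMemory:
        def __init__(self):
            self.entries = []     # (timestamp, field, value) triples
            self.latest = {}      # most recent value of each field

        def record(self, field, value, timestamp=None):
            """Store a reading and report whether it represents a change."""
            timestamp = timestamp if timestamp is not None else time.time()
            changed = self.latest.get(field) != value
            self.entries.append((timestamp, field, value))
            self.latest[field] = value
            return changed

        def recent_values(self, field, window_seconds):
            """Values of one field over the recent past, as raw material for synthesis."""
            cutoff = time.time() - window_seconds
            return [v for t, f, v in self.entries if f == field and t >= cutoff]

    memory = ContextMemory()
    if memory.record("location", "meeting room"):
        print("the user has moved")
    # Synthesis over memory: what has the user been viewing for the past ten minutes?
    topics = memory.recent_values("web_page_topic", 10 * 60)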

For many applications context memory needs to straddle the past, the present and the future. Some contextual events, at the time they are captured, relate to intended future events (e.g. diary entries, todo lists, forecasts). Normally such events are at the human task level rather than that of low-level sensors, and are thus more valuable. As time proceeds, future events become past events in the context memory, especially if other contextual readings at the time of the event indicate that the event really did happen.
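
A small sketch of this straddling, with field names that are purely assumptions, might mark a diary entry as having really happened once readings captured around the scheduled time agree with it:

    # Illustrative sketch: a future diary entry becomes a confirmed past event
    # once context captured around its scheduled time agrees with it.
    # All field names are assumptions made for this example.

    def confirm_event(diary_entry, readings_at_event_time):
        """Return a copy of the diary entry marked with whether it really happened."""
        at_place = readings_at_event_time.get("location") == diary_entry["place"]
        people_present = readings_at_event_time.get("people_present", 0) >= 2
        return {**diary_entry, "happened": at_place and people_present}

    meeting = {"time": "14:00", "place": "room 3.14", "activity": "project review"}
    observed = {"location": "room 3.14", "people_present": 4}
    print(confirm_event(meeting, observed))   # ... 'happened': True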

We have discussed the topic of recording context at some length because, as has been widely observed, a substantial infrastructure for capturing and managing context will be needed before the applications we describe can realize their full potential. For more discussion of some of the issues see Salber et al [3].
 