Thursday, January 27, 2011

The Design of Future Things, Chapter 3

Reference Information
The Design of Future Things
Donald A. Norman
Basic Books, New York, 2009



Summary
Machines currently fail to communicate effectively with humans because they rely on signals such as loud, annoying sounds and flashing lights. These may indicate that an error has occurred, but once many devices in one location are all vying for attention, it becomes impossible to tell what is going on. This is where natural interaction comes in: by moving signals into the subconscious background, machines can communicate unobtrusively with their users. In the case of sound, this can be accomplished by shifting the signal into the ambient plane, where the current state of the machine is conveyed by varying the ambient noise, making it louder and more obvious when something starts to go wrong. By giving the user an indication of imminent failure, the machine prepares the user to take manual control, which avoids the main pitfall of automation: the unexpected failure of the machine. The ambient sound should come from a naturally occurring component of the system, if possible. Another way of communicating subtly with the human is by suggesting or tempting the user into choosing the preferred or safest setting.


The author claims that humans know what to do when encountering a new object because they recognize features of the object that they have experienced before. He calls these relationships between object and agent affordances. To take advantage of this intuition, devices should exhibit such features so that people can instantly identify a machine's purpose and how to use it, without having to be told anything about it. To make the effect even more natural, the level of control exerted by intelligent machines should vary depending on the affordances of the human operator.


One way to control the machines would be through what the author terms "playbooks". The user selects a playbook that determines how much control the person has and how much the machine does, and that allows the user to tweak exactly what the machine should do. In addition, the machine should display which playbook it is currently operating under while it is in full control, so that the user knows what to expect and can change the playbook if desired. An intelligent machine should not, however, attempt to predict what a user might want, since the user will then be left trying to predict what the machine might do. To avoid this, the machine should be as predictable as possible and begin intelligent operation only after being given instructions by the user.


Counterintuitively, the author suggests that danger may actually aid safety, since people tend to take fewer risks when what they are doing is considered unsafe. This effect is called risk homeostasis, and it may need to be implemented purposefully, since modern technology has attempted to remove such feedback altogether.


Again, the author proposes his symbiotic relationship as the ideal form of human-machine interaction. This time he uses the examples of Cobots and Segways to show that machines can respond so effectively to human actions that the user may become oblivious to the fact that he is still using a machine.


Opinion
The most interesting points raised in this chapter were the use of ambient sounds from machines to indicate state and the apparent correlation between danger and risk taking. Like most people, I am sometimes already overwhelmed by the amount of information generated and annoyingly broadcast to me by my devices. The idea that things can be made more dangerous in order to make them safer is intriguing, but I believe it must be restricted to domains such as transport since, as he mentions in the chapter, few people want to purchase something that may begin to act unsafely of its own accord.






