Saturday, January 29, 2011

The Design of Future Things, Chapter 4

Reference Information
The Design of Future Things
Donald A. Norman
Basic Books, New York, 2009



Summary
In chapter four, entitled "Servants of Our Machines", the author emphasizes how machines have come to dominate our lives to such a degree that we spend much of our time keeping the machines happy instead of the other way around. He claims that we have become entirely dependent on machines for our well-being, relying on them to do work that is harder or more tedious than we are willing to do ourselves. The crux of the problem is that most people do not understand how a machine actually works, leaving them in a position where they must keep it running or the job will not get done. To make matters worse, the machines need constant maintenance and supervision, so their human masters in effect become the servants. The constant march of technology will not, in the author's opinion, help matters at all, since each new advance solves known problems but introduces new and unforeseen ones.


Intelligent machines do a reasonable job of interpreting and interacting with the physical environment and other intelligent agents, but they come up short in their attempts at human interaction. Machines are especially good at performing well-defined tasks where they do not have to interact with anything outside of a small environment. They can also serve effectively in industrial settings, since the human operator is forced to learn all of the intricacies of the system, in effect changing the person to suit the machine. In places where people interact with machines without any special training, such as the home, the machines fall flat because they are incapable of communicating effectively. To bridge the communication gulf between humans and machines, machines must be taught "social graces, superior communicative skills, and even emotions".


Designers must avoid the trap of over-automation that falls short of full automation, that is, devices that manage certain parts of an operation while leaving other parts to their human operators. This sort of design is dangerous because the system might unexpectedly cede control fully back to the user, who will inevitably be unable to react effectively. The primary cause is human inattentiveness: when the machine is automating well, the operator's input is unnecessary, and a person's situational awareness and vigilance degrade the longer the machine continues to operate without intervention. Ideally, the author argues, automation should be all or nothing: either full automation or no automation at all.


The chapter also introduces artificial swarms and platoons. The idea of a swarm is to fit many agents into a small space and have them all head in a general direction while each makes sure that it does not collide with any of the others. Such systems would allow many more vehicles to drive on the same roads, since a swarm can pack the vehicles much closer together. This is achieved by having all members of the swarm communicate the desired direction of travel. Platoons, on the other hand, work by having everyone follow a leader and would be much simpler to implement.
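The book describes these two schemes only at a conceptual level. As a rough illustration of the difference, the following Python sketch (my own construction, not the author's; the function names and numeric constants are made up for illustration) contrasts a swarm agent, which blends a shared desired direction with collision avoidance against its neighbors, with a platoon follower, which only needs to track the vehicle directly ahead of it.

import math

# Swarm: each agent heads in a shared desired direction while
# steering away from any neighbor that gets too close.
def swarm_step(positions, desired_dir, min_gap=2.0, speed=1.0):
    # positions   -- list of (x, y) tuples, one per agent
    # desired_dir -- (dx, dy) direction all agents try to follow
    # min_gap     -- distance below which an agent steers away from a neighbor
    new_positions = []
    for i, (x, y) in enumerate(positions):
        vx, vy = desired_dir
        # Add a repulsion term for every neighbor closer than min_gap.
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            dist = math.hypot(x - ox, y - oy)
            if 0 < dist < min_gap:
                vx += (x - ox) / dist
                vy += (y - oy) / dist
        norm = math.hypot(vx, vy) or 1.0
        new_positions.append((x + speed * vx / norm, y + speed * vy / norm))
    return new_positions

# Platoon: every vehicle simply keeps a target gap to the one ahead.
def platoon_step(positions_1d, leader_speed=1.0, target_gap=5.0, gain=0.5):
    # positions_1d -- positions along the road, index 0 is the leader
    new = [positions_1d[0] + leader_speed]      # the leader drives freely
    for i in range(1, len(positions_1d)):
        gap = new[i - 1] - positions_1d[i]      # distance to the car ahead
        speed = leader_speed + gain * (gap - target_gap)
        new.append(positions_1d[i] + max(0.0, speed))
    return new

if __name__ == "__main__":
    swarm = [(0.0, 0.0), (1.0, 0.5), (0.5, 1.5)]
    for _ in range(3):
        swarm = swarm_step(swarm, desired_dir=(1.0, 0.0))
    print("swarm:", [(round(x, 2), round(y, 2)) for x, y in swarm])

    platoon = [30.0, 24.0, 18.0]
    for _ in range(3):
        platoon = platoon_step(platoon)
    print("platoon:", [round(p, 2) for p in platoon])

Even in this toy version the asymmetry is visible: each swarm agent has to watch every neighbor, while a platoon vehicle only needs information about the car directly in front of it, which is why the author suggests platoons would be much simpler to implement.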


Opinion
By far the most interesting idea presented in this chapter is that of swarms, particularly swarms of vehicles. If such a system were to come to fruition, transportation could become enormously more efficient. As for becoming slaves to our machines, I both agree and disagree with the author. The author wants full automation, where the human gives almost no input and the machines essentially take care of everything. This is fine for industrial environments, but at home I would rather have much more control over my environment than cede every decision to machines.



