There is, in fact, almost no formal theory dealing with analogue communication and, in particular, no equivalent of Information Theory or Logical Type Theory. This gap in formal knowledge is inconvenient when we leave the rarified world of logic and mathematics and come face to face with the phenomena of natural history. In the natural world, communication is rarely either purely digital or purely analogic.1
These are Gregory Bateson's words. I consider that the user interface faces the same problem. The computer is digital; it is governed by formal theory. We humans, however, are not purely digital: we are analogic rather than digital. The user interface should therefore be the place where the purely digital computer becomes analogic, and the analogic human becomes digital, so that the two can communicate with each other.
Display Acts
When we use a computer, what do we do? Almost all of us look at images on an electronic display, type on a keyboard, and hold and move a mouse, our right hand guiding it to point at an image called an icon on the display. This feels entirely 'natural' to us: when our body acts on a plastic object, the images on the electronic display change. Yet this relationship between our body and the image did not exist until the computer, and especially the Graphical User Interface (GUI), appeared. I call this phenomenon 'Display Acts': actions formed by connecting our bodily actions with the changes of images on the electronic display.2 We do not yet understand these actions well, however, because in man-computer communication we have studied bodily actions and images only separately.
In order to clarify the relationship between our bodily actions and the images in the man-computer world, this presentation focuses on the cursor on the electronic display and the mouse in the real world. We see the cursor on the display: the cursor is "a small mark on a computer screen that can be moved and that shows the position on the screen, where, for example, text will be added."3 Moreover, the cursor's shape is usually an arrow (→), which is "used to show direction or position."4 We watch the arrow cursor in order to know where it is and to point at images on the display. At the same time, we hold the mouse.
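To make this coupling of body and image concrete, here is a minimal sketch in browser TypeScript; it is only an illustration of mine under assumed conditions, not part of the cited sources. The hand's movement arrives at the display as nothing but a pair of numbers, from which the small mark is drawn.

```typescript
// Minimal sketch: the bodily action of moving the mouse reaches the
// display as a pair of coordinates; the arrow is redrawn from them.
// The logging stands in for the system's drawing of the cursor image.
document.addEventListener("mousemove", (event: MouseEvent) => {
  // The hand's analogic movement is already digitized here: two numbers.
  console.log(`cursor mark shown at (${event.clientX}, ${event.clientY})`);
});
```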
Exemplification
What does the cursor show us? It shows us position and direction; however, the cursor often points at nothing, and when the arrow cursor points at nothing it does no work. Yet we do not care. Why? Because we are not only seeing the images on the display but also grasping the mouse. What, then, is the relationship between seeing and grasping? To clarify this relationship between the cursor and the mouse, I refer to Nelson Goodman's idea of exemplification. According to Goodman:
Exemplification, though one of the most frequent and important functions of works of art, is the least noticed and understood. Not only some troubles about style but many futile debates over the symbolic character of art can be blamed on ignoring the lessons, readily learned from everyday cases of the relation of being-a-sample-of, that mere possession of a property does not amount to exemplification, that exemplification involves reference by what possesses to the property possessed, and thus that exemplification though obviously different from denotation (or description or representation) is no less a species of reference.5
Possession means that a sample shows us its own properties, and we interpret those properties according to our interests. The user interface is not a work of art, but I consider that the images on the electronic display, together with the mouse or other input devices, perform almost the same function as a work of art.
We see the shape of the cursor as a sample in order to know the cursor's functions. From the arrow shape we derive a pointing function, and we therefore attach the label of pointing to the arrow cursor. However, the arrow cursor often points at nothing, so we suppose that the arrow possesses another function: perhaps it is moving in order to point, since the arrow is a shape made for pointing. As long as we only see the electronic display, we can know only the pointing and moving functions of the cursor (fig. 1). But the user interface also has the mouse. We touch and grasp the mouse, and we see that the cursor moves and points at an image. This touching is another function of the cursor, because the cursor is designed to be coupled with the mouse in order to work on the display. Only when we touch the mouse can we understand that the cursor is able to touch an image and change the images on the display (fig. 2). Pointing alone cannot change anything on the display; changing requires touching. In short, when we use the computer we are not only seeing the images on the display but also grasping the mouse. We therefore have to consider what the relationship between seeing and touching is.
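The difference between pointing and touching can be sketched in browser TypeScript. The element id and the counter below are my own assumptions for illustration: hovering (pointing) changes nothing in the page's state, while the click relayed through the mouse button (touching) changes the image.

```typescript
// Hypothetical sketch: pointing alone changes nothing; touching does.
const icon = document.getElementById("icon")!; // "icon" is an assumed element id
let openCount = 0; // assumed state behind the image

// Pointing: the cursor passes over the icon and the state is untouched.
icon.addEventListener("mouseover", () => {
  console.log("pointed at: nothing changes on the display");
});

// Touching: the click, carried from the hand through the mouse,
// changes the state and therefore the image on the display.
icon.addEventListener("click", () => {
  openCount += 1;
  icon.textContent = `opened ${openCount} times`;
});
```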
Moreover, the arrow cursor sometimes suddenly changes into the spinning wait cursor or some other shape. When we see these transformations of the cursor's shape, the mouse we grasp does not change. Yet the moment the arrow cursor changes into the spinning wait cursor, our action with the mouse changes. Before the transformation, we use the arrow cursor and the mouse in order to point to an image on the display; the cursor exemplifies the function of pointing, and our bodily action with the mouse is formed for pointing. After the transformation, the spinning wait cursor exemplifies not pointing but the display of the computer's status. As a result, our action with the mouse is no longer formed for pointing. After a moment, we realize that all we can do with the mouse in hand is move the cursor around the display. The spinning wait cursor exemplifies that the computer is still working, but it does not leave us alone; it says that the cursor and the mouse are still connected: please wait. Throughout the whole process we grasp the same plastic object, even though the cursor's shape is changing.
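This transformation, too, can be sketched in browser TypeScript. The function name below is a hypothetical stand-in for whatever keeps the computer busy; the CSS cursor values "wait" and "default" are standard.

```typescript
// Sketch: the cursor image is exchanged for the wait shape while the
// computer works; the plastic object in the hand stays the same.
async function longTask(): Promise<void> {
  // hypothetical stand-in for the computer's invisible work
  await new Promise((resolve) => setTimeout(resolve, 2000));
}

async function runWithWaitCursor(): Promise<void> {
  document.body.style.cursor = "wait"; // the arrow becomes the wait cursor
  try {
    await longTask(); // the mouse still moves the cursor, but pointing is suspended
  } finally {
    document.body.style.cursor = "default"; // pointing is restored
  }
}
```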
Double Description
In the user interface, the image we see changes while the object we touch does not. Moreover, this transformation of the image is sometimes out of our control, and yet we accept the phenomenon as natural. To clarify it, I refer to Bateson's notion of double description. He quotes Shakespeare's Macbeth and says:
This literary example will serve for all those cases of double description in which data from two or more different senses are combined. Macbeth “proves” that the dagger is only an hallucination by checking with his sense of touch, but even that is not enough. Perhaps his eyes are “worth all the rest.” It is only when “gouts of blood” appear on the hallucinated dagger that he can dismiss the whole matter: “There’s no such thing.” 6
We need a double description of seeing and touching in order to consider, and to prove, what the user interface is. However, user interface design has mainly focused on seeing. Consider What You See Is What You Get (WYSIWYG): according to Wikipedia, the main attraction of WYSIWYG is the ability of the user to visualize what he or she is producing.7 Here, visualizing is what is emphasized. One more example: Ben Shneiderman proposed a very important idea for the user interface, Direct Manipulation, whose central idea is the visibility of the object of interest.8 Here too, only visibility is emphasized.
I want to look at Direct Manipulation from another point of view. According to George Lakoff and Mark Johnson, direct manipulation involves not only seeing but also touching, because we see and touch an object in order to create something, and they argue that direct manipulation gives rise to the concept of CAUSATION.9 Furthermore, Søren Pold connects the GUI with the concept of CAUSATION. He notes that the WYSIWYG interface has many buttons, and that buttons are an essential part of the controls in a GUI,10 and he continues:
When pushing a button in an interface ─ that is, by movement of the mouse, directing the representation of one's hand onto the representation of a button in the interface and activating a script by clicking or double-clicking ─ we somehow know we are in fact manipulating several layers of symbolic representation … It is a software simulation of a function, and this simulation aims to hide its mediated character and acts as if the function were natural or mechanical in a straight cause-and-effect relation.11
According to Pold, pushing a button in the user interface produces a straight cause-and-effect relation on the electronic display, although this cause-and-effect appears only by hiding the manipulation of several layers of symbolic representation.
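Pold's point can be restated as a sketch in browser TypeScript, with assumed names throughout: the single "push" passes through several layers of symbolic representation before the display changes, yet the user sees only one cause and one effect.

```typescript
// Hypothetical sketch of the layers hidden behind one button push.
const button = document.getElementById("save-button")!; // assumed element id
let saved = false; // assumed symbolic state

button.addEventListener("click", () => {
  // Layer 1: the hardware turned a finger press into a click event.
  // Layer 2: the browser dispatched that event to this handler.
  // Layer 3: the handler rewrites a symbolic state variable.
  saved = true;
  // Layer 4: only this change of image is seen, as "cause and effect".
  button.textContent = saved ? "Saved" : "Save";
});
```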
There is another view of the cursor and the mouse. The media artist Masaki Fujihata says that for the computer there is no question of materiality.12 This means that the computer operates on a different principle from ours, because we live in the material world. Fujihata considers that if repeated, interactive experiences are provided to us, images are made into objects.13 Following the artist's intuition, we are passing through a new phenomenon: images are made into objects.
What You See Is What You Touch?
However, this 'object' on the display is not a real object. According to Pold, the 'object' on the display shows us cause-and-effect, and we therefore feel as if we were directly manipulating it with our own hands. On the other hand, Fujihata says that the 'object' has no materiality of its own. We see the 'object', of course, but can we touch it? Maybe yes; maybe no. I consider that this 'object' is something like the "switch" as Bateson defined it:
We do not notice that the concept “switch” is of quite a different order from the concepts “stone,” “table,” and the like. Closer examination shows that the switch, considered as a part of an electric circuit, does not exist when it is in the on position. From the point of view of the circuit, it is not different from the conducting wire which leads to it and the wire which leads away from it. It is merely “more conductor.” Conversely, but similarly when the switch is off, it does not exist from the point of view of the circuit. It is nothing, a gap between two conductors which, themselves exist only as conductors when the switch is on. In other words, the switch is not except at the moments of its change of setting, and the concept “switch” has thus a special relation to time. It is related to the notion “change” rather than to the notion “object.” 14
Like the concept "switch", the 'object' on the display is related to the notion "change" rather than to the notion "object". The cursor is given this special relation to time by the mouse.
Why does this happen? Because the computer is a purely logical machine made by analogic humans who live in a cause-and-effect world; "the if ... then of logic in the syllogism is very different from the if ... then of cause and effect."15 Bateson asks: can logic simulate all sequences of cause and effect? His answer is that perhaps it cannot.16 The reason is that "the if ... then of causality contains time, but the if ... then of logic is timeless. It follows that logic is an incomplete model of causality."17 Nevertheless, we want to remake the computer as a cause-and-effect machine through the double description of seeing and touching. By introducing, through the mouse, the special relation to time, the user interface makes new images that couple cause-and-effect with logic on the electronic display. We are making pseudo-cause-and-effect out of logic in the user interface. That is why we touch the same plastic object even though the 'object' on the display is changing.
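The distinction can be glimpsed even in code; the following browser TypeScript sketch is my own illustrative mapping, not Bateson's. The pure function is the timeless if ... then of logic, holding whenever it is evaluated; the event handlers are the interface's pseudo-cause-and-effect, existing, like Bateson's switch, only at the moment of a change of setting.

```typescript
// Timeless logic: no before or after, only implication.
const implies = (p: boolean, q: boolean): boolean => !p || q;
console.log(implies(true, false)); // false, whenever it is asked

// Time-bound pseudo-cause-and-effect: each handler runs only at the
// moment of change, and the press precedes the release in real time.
document.addEventListener("mousedown", () => {
  console.log(`cause (press) at ${Date.now()}`); // causality contains time
});
document.addEventListener("mouseup", () => {
  console.log(`effect (release) at ${Date.now()}`);
});
```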
fig. 3
Let us return to the cursor and the mouse. The cursor looks as if it shows cause-and-effect; in fact it stands for the logic in the computer, where there is no time. But we live in time, in the cause-and-effect world. Pold tells us that the user interface makes us believe in a straight cause-and-effect relation, even though this cause-and-effect appears only by hiding the manipulation of several layers of symbolic representation. Fujihata, on the other hand, says that if we see cause-and-effect behind the image, then we can touch it. Almost none of us understand what the logic in the computer is, but we use it through the display, the keyboard, the mouse, and other devices. The reason is that we believe the computer is neither only logic nor only cause-and-effect, but just 'something'. Therefore, when we think about the user interface, we should not confuse the 'if ... then of logic' with the 'if ... then of cause-and-effect'. We see the change of images on the electronic display; it shows us the logic in the computer, yet we feel something like cause-and-effect, made out of the computer's pure logic. We touch the same object while the image is changing: our grasp of the mouse introduces the special relation of time into the computer and makes pseudo-cause-and-effect within it (fig. 3). Seeing alone is therefore not enough, and neither is touching alone. In order to clarify this 'something', the pseudo-cause-and-effect made with logic, we have to consider the double description of seeing and touching in the user interface. In short, the computer has changed the relationship between our seeing and our touching. The computer changes our notion and experience of touching an object: we touch not an "object" but a "change".
References
1 Gregory Bateson, “Steps to an Ecology of Mind”, The University of Chicago Press, 2000, p.291.
2 Masanori Mizuno, “The formative process of ‘Display Acts’ on the establishment of GUI”, doctoral thesis, 2009, p.3.
3 Oxford Advanced Learner’s Dictionary (electronic version), Oxford University Press, 2000.
4 Ibid.
5 Nelson Goodman, “Ways of Worldmaking”, Hackett Publishing Company, 1978, p.32.
6 Gregory Bateson, “Mind and Nature”, Wildwood House, 1979, p.73.
7 Wikipedia, http://en.wikipedia.org/wiki/WYSIWYG (accessed 9 April 2009).
8 Ben Shneiderman, ‘Direct Manipulation: A Step Beyond Programming Languages’ in Noah Wardrip-Fruin and Nick Montfort eds., “The New Media Reader”, MIT Press, 2003, p.486.
9 George Lakoff & Mark Johnson, “Metaphors We Live By”, The University of Chicago Press, 2003, pp.75-76.
10 Søren Pold, ‘Button’ in Matthew Fuller ed., “Software Studies: a lexicon”, MIT Press, 2008, p.31.
11 Ibid., p.33.
12 Masaki Fujihata, “Masaki Fujihata: The Conquest of Imperfection ─ New Realities Created with Images and Media”, Center for Contemporary Graphic Art, 2007, p.178.
13 Ibid., p.178.
14 Bateson (1979), pp.108-109.
15 Ibid., p.58.
16 Ibid., p.58.
17 Ibid., p.59.
Original text: Proceedings of the 4th International Symposium on Multi-Sensory Design