Saturday, November 13, 2010

The ‘thinness’ is given to the plane: A study of the ‘plane’ in Masaki Fujihata’s works

This article examines what the ‘plane’ is in Masaki Fujihata’s works. Although Fujihata is known as one of the most famous media artists, Unformed Symbols is not well known: it is simply an animation, the kind of work with which Fujihata began his artistic career. In making this and other works, such as the ‘sculpture’ Forbidden Fruits and interactive works like Beyond Pages, however, he discovered for himself the possibilities of computer graphics and, as I explore in this paper, came to tackle the problem of the plane with, perhaps for the first time, the computer.

I consider three of Fujihata’s works in order to examine this handling of the plane. First, I compare the plane in Forbidden Fruits with Leo Steinberg’s ‘flatbed picture plane.’ This comparison makes clear that the plane no longer plays a privileged role for the image within a collection of data. Secondly, I compare the interactive work Beyond Pages with the Graphical User Interface in order to show that the plane in the computer, both as artwork and as utilitarian feature, becomes too thin to grasp with our hands. Thirdly, I consider why the animation Unformed Symbols overlaps the image with the real, showing that there is no difference between the plane and the solid in this ‘thin’ world. Accordingly, I conclude that Fujihata may have created a new plane itself by giving the plane a ‘thinness’ which causes a ‘switchover between dimensions.’

Incidentally, the architect Junya Ishigami’s Table, which has a very thin tabletop, shows some similarities to Fujihata’s ‘thin’ plane. Furthermore, in his architectural critique, Taro Igarashi refers to the tabletop of Table as Superflat. Thus, I finally point out that Fujihata’s ‘thin’ plane shares a homology with Superflat, which, as proposed by the artist Takashi Murakami and developed into a discussion about information by the philosopher Hiroki Azuma, has become a fundamental concept for modern Japanese art, and I also suggest this ‘switchover between dimensions.’

Sunday, September 12, 2010

'In-between,' 'hover,' 'unfixed.'

'In-between,' 'hover,' 'unfixed.' These words remind me of the cursor on the display. I took them from a book about "Window|Interface." We often consider what the window is, but we also have to consider what the cursor is. I think that the cursor is the representation of our age, especially the last thirty years, 1980–2010.

Saturday, September 11, 2010

The absolutely thin electronic device



I'm interested in absolute thinness in electronic devices, especially touch-interface ones like the iPhone. What is thinness for the interface? The thin thing is not a plane but an object. For example, even though a card is a plane when we look at it, it changes into an object when we touch it. Can we decide the point between the plane and the object? Electronic devices become thinner and thinner, and at last they will reach absolute thinness. How will we perceive this absolutely thin electronic device? Is it a plane or an object? Is it only a display, or not? Is it only an image, or not?

Wednesday, August 25, 2010

First draft: The cursor is the switchover entity between the real and the virtual: Two works about the cursor by exonemo

This article examines the representation of the cursor in the Graphical User Interface. We often look at the cursor as an extension of our hand or as the double of a pointing device such as the mouse. Although the cursor on the display has been familiar to us since the desktop metaphor appeared, ‘↑’ has no correspondence with a real thing, such as a file or folder. Moreover, the cursor does not obey physical law because it exists in cyberspace. Therefore, we are familiar with the cursor, yet we know almost nothing about ‘↑’.

exonemo accepts this unidentifiability of the cursor and considers the cursor a halfway being: the cursor is A, and at the same time it is B. This idea leads us to a new understanding of the cursor, so I focus on two of their works, DanmatsuMouse (2007) and (2010), in order to explain what the essence of the cursor is. These two works make clear that the cursor is the switchover entity which creates the ‘between’ of the real world and the virtual world, and then switches from the real to the virtual, or vice versa.

First draft: A ‘plane’ in Masaki Fujihata’s works: Gravity/Projection/Overlapping

Masaki Fujihata is known as one of the most famous media artists. Although almost everyone thinks of Fujihata’s work as interactive, Unformed Symbols (2006) is not an interactive work, just an animation. Fujihata’s early work Forbidden Fruits is a ‘sculpture’; it is not interactive, either. I would like to focus not on the ‘interactive’ aspect of Fujihata’s work but on its ‘plane’ aspect, because he often refers to the flatness of his works.

Fujihata discovered a unique field in the computer where there is no difference between the plane and the solid. Forbidden Fruits was made by stereolithography, which builds a solid by piling up many planes. Fujihata used this technique in order to take the shape of data directly out of the computer. In Forbidden Fruits, the planes become the solid. I will compare the plane in Forbidden Fruits with Leo Steinberg’s ‘flatbed picture plane’ in order to clarify the meaning of the plane in data.

Beyond Pages is Fujihata’s most famous work because it is interactive. Although Fujihata points out that Beyond Pages proposes new semiotic issues, I consider that this work also raises a question: what is the plane in the age of the Graphical User Interface? The GUI presents a new two-dimensional surface in the sense that it is not two-dimensional but only looks two-dimensional. Beyond Pages projects the image of pages onto a table, and we regard it as a book. However, we cannot grab the book on the table, because it is too thin to catch. A very thin image plane overlaps the desktop. Beyond Pages shows us that every plane in the GUI is too thin to grasp with our hands.

Unformed Symbols overlaps image playing cards with real playing cards. Fujihata does not use the computer in this work, so it is not an interactive one. However, this overlapping makes a new thin world on the tabletop, and there we experience more complex interaction under physical law than the computer provides. Junya Ishigami has made a big, thin table. The strangeness of this table’s thinness is very similar to that of Unformed Symbols. Neither work uses the computer; nevertheless, more complex interaction happens in both works because of their own thin planes. The plane is not only the plane but also the solid. We have already met this situation on the computer, but Fujihata’s Unformed Symbols and Ishigami’s Table make a no-gravity field in the real, gravitational world. In this thin world, there is no difference between the plane and the solid.

I would like to call Fujihata an architect of the plane. Of course, Fujihata is an artist, but I dare to say he is the architect of the superflat age. Fujihata anticipates the architects of the superflat movement. Superflat is Takashi Murakami’s idea for exporting Japanese art into the worldwide art market. Although superflat comes from Japanese subculture, the idea is also related to the computer’s flat world, such as the GUI. We have started to refer to computer engineers as IT architects; therefore, I call Masaki Fujihata, who knows the essence of the computer, the architect of the plane.

Wednesday, May 5, 2010

An intersubjective image

Software works make a situation: not 'it used to be there' but 'this is here now.' The image is summoned to appear on the display by the computer.
http://twitter.com/mmmemoe/status/13298957788

The computer gives us a second language in order to communicate between the human and itself.
http://twitter.com/mmmemoe/status/13318381249

There is a possibility of a new second language for human-computer communication.
http://twitter.com/mmmemoe/status/13318619176

The picture and the movie are the first visual language. The image based on the data which the computer makes is the second visual language.
http://twitter.com/mmmemoe/status/13318763196

The first visual language has only two categories: the subjective image and the objective one.
http://twitter.com/mmmemoe/status/13318901336

The second one has three categories: the subjective image, the objective image, and the intersubjective image, which opens a new thought.
http://twitter.com/mmmemoe/status/13318966373

An intersubjective image between individuals [human and human]: the photo and the movie.
http://twitter.com/mmmemoe/status/13351803960

An intersubjective image between spaces [human and computer]: the image based on data.
http://twitter.com/mmmemoe/status/13351812840

Whether the computer is rational or not is the wrong question.
http://twitter.com/mmmemoe/status/13352012710

Whether the image on the display made by the computer is rational or not is the right question.
http://twitter.com/mmmemoe/status/13352044867

The computer needs the image in order to make a thought.
http://twitter.com/mmmemoe/status/13352080365

Monday, February 8, 2010

Re-thinking the interface: What is the meaning of Alan Kay's "Doing with Images makes Symbols"

Alan Kay created a slogan for the development of user interfaces: Doing with Images makes Symbols. This slogan reveals the essence of the Graphical User Interface (GUI), which millions of people now use all over the world. This paper examines how the GUI, based on Kay's slogan, has changed our communication with the computer. In order to explore this topic, I first take up Azuma's and Turkle's arguments about "interface value" as an intuition for thinking about the relationship between our actions and new sign surfaces. Next, Kay's slogan is considered from the viewpoint of its formation process. Finally, I refer to Sperber and Wilson's relevance theory. I conclude that the GUI demands that we, along with computers, act for ostensive-inferential communication in a mutual cognitive environment.

Monday, January 11, 2010

What You See Is What You Touch? Double description of seeing and touching in ‘Display Acts’

There is, in fact, almost no formal theory dealing with analogue communication and, in particular, no equivalent of Information Theory or Logical Type Theory. This gap in formal knowledge is inconvenient when we leave the rarified world of logic and mathematics and come face to face with the phenomena of natural history. In the natural world, communication is rarely either purely digital or purely analogic.1
These are Gregory Bateson’s words. I consider that the user interface has the same problem. The computer is digital. The computer is controlled by formal theory. However, we humans are not purely digital. We are analogic rather than digital. Therefore, the user interface should be the place where the purely digital computer becomes analogic and the analogic human becomes digital, so that they can communicate with each other.

Display Acts
When we use a computer, what do we do? Almost all of us look at images on an electric display, grab and move a mouse, and type on a keyboard; our right hand holds the mouse in order to point to an image called an icon on the display. This is very 'natural' for us. When our body performs some action with a plastic object, the images on the electric display change. However, this relationship between our body and the image did not exist until the computer, and especially the Graphical User Interface (GUI), appeared. I call this phenomenon 'Display Acts': action which is formed by connecting our body action with the change of images on the electric display.2 However, we do not understand this action well, because we have only studied body actions and images separately in man-computer communication.
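This coupling of body action and image change can be sketched in code. The following is a minimal, hypothetical model (the class and method names are my own invention, not from any real windowing system): moving the plastic mouse is translated into a change of the cursor image, which in turn changes which icon is 'pointed at' on the display.

```python
# A minimal sketch of a 'Display Act': a bodily action (moving a mouse)
# is coupled to a change of images on the display. All names here
# (Display, move_mouse, pointed_icon) are hypothetical, for illustration.

class Display:
    """Holds named images (icons) at screen positions plus one cursor image."""

    def __init__(self, icons):
        self.icons = icons            # e.g. {"trash": (40, 300)}
        self.cursor_pos = (0, 0)      # the cursor is itself an image

    def move_mouse(self, dx, dy):
        # The physical gesture becomes a change of the cursor image.
        x, y = self.cursor_pos
        self.cursor_pos = (x + dx, y + dy)
        return self.pointed_icon()

    def pointed_icon(self, radius=10):
        # The cursor 'points at' an icon when it is close enough to it.
        cx, cy = self.cursor_pos
        for name, (ix, iy) in self.icons.items():
            if abs(cx - ix) <= radius and abs(cy - iy) <= radius:
                return name
        return None

display = Display({"trash": (40, 300)})
display.move_mouse(35, 295)    # the hand moves the mouse ...
print(display.pointed_icon())  # ... and the image world changes: trash
```

The point of the sketch is that the 'act' exists only in the coupling: neither the hand gesture nor the image change alone constitutes it.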

In order to clarify the relationship between our body actions and images in the man-computer world, this presentation focuses on the cursor on the electric display and the mouse in the real world. We see the cursor on the electric display: the cursor is "a small mark on a computer screen that can be moved and that shows the position on the screen, where, for example, text will be added."3 Moreover, the shape of the cursor is usually an arrow (→), which is "used to show direction or position."4 We just see the arrow cursor in order to know where it is and to point at images on the display. At the same time, we also hold the mouse.

Exemplification
What does the cursor show us? It shows us position and direction; however, the cursor often points at nothing. When the arrow cursor points at nothing, it does not work. We don't care about this. Why? Because we not only see the images on the display but are also grabbing the mouse. What is the relationship between seeing and grabbing? In order to clarify this relationship between the cursor and the mouse, I refer to Nelson Goodman’s idea of exemplification. According to Goodman:
Exemplification, though one of the most frequent and important functions of works of art, is the least noticed and understood. Not only some troubles about style but many futile debates over the symbolic character of art can be blamed on ignoring the lessons, readily learned from everyday cases of the relation of being-a-sample-of, that mere possession of a property does not amount to exemplification, that exemplification involves reference by what possesses to the property possessed, and thus that exemplification though obviously different from denotation (or description or representation) is no less a species of reference.5
Possession means that a sample shows us its own properties, and we interpret those properties according to our interests. The user interface is not a work of art, but I consider that the images on the electric display, together with the mouse or other input devices, show us almost the same functions as a work of art.

We see the shape of the cursor as a sample in order to know the cursor's functions. We derive a pointing function from the arrow shape; therefore, we attach this pointing function to the arrow cursor. However, the arrow cursor points at nothing. As a result, we think this arrow possesses other functions: perhaps it is moving in order to point, because the arrow is a shape for pointing. As long as we only see the electric display, we can understand only that the function of the cursor is pointing. However, the user interface includes the mouse. We touch and grab the mouse, and then we see that the cursor moves and points at an image. This touching reveals another function of the cursor, because the cursor is designed to connect with the mouse in order to work on the display. When we only see the display, we know only the pointing and moving functions of the cursor (fig. 1). Only when we touch the mouse can we understand that the cursor is able to touch an image and change the images on the display (fig. 2). Pointing alone cannot change anything on the display. Changing needs touching. In short, we not only see the images on the display but also grab the mouse when we use the computer. Therefore, we have to consider what the relationship between seeing and touching is.

            fig.1

            fig. 2
Moreover, the arrow cursor sometimes suddenly changes into the spinning wait cursor or something else. While we see these transformations of the cursor's shape, the mouse we grasp does not change. However, the moment the arrow cursor changes into the spinning wait cursor, our action with the mouse changes. Before the transformation, we use the arrow cursor and the mouse in order to point to an image on the display; therefore the cursor exemplifies the function of pointing, and our body action with the mouse is formed for pointing. After the transformation, the spinning wait cursor exemplifies the function not of pointing but of showing the computer's status. As a result, our action with the mouse is no longer formed for pointing. After a moment, we realize that what we can do with the mouse in hand is just move the cursor on the electric display. The spinning wait cursor exemplifies that the computer is still working, but it does not leave us alone: the cursor and the mouse are still connected, so please wait. Throughout the whole process, we grasp the same plastic object, even though the cursor's shape is changing.
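The asymmetry described above, where the cursor image transforms while the grasped object stays identical, can be sketched as follows. The class names and shape labels are hypothetical, invented for illustration; real systems name their cursor states differently.

```python
# A sketch of the transformation above: the cursor image switches from
# 'arrow' (pointing) to 'wait' (showing the computer's status) while the
# physical mouse in our hand never changes. Names are hypothetical.

class Mouse:
    """The plastic object we grasp: it has no changing state of its own."""
    pass

class Cursor:
    # Each shape exemplifies a different function to the user.
    SHAPES = {"arrow": "points at images", "wait": "shows the computer's status"}

    def __init__(self):
        self.shape = "arrow"

    def exemplifies(self):
        return Cursor.SHAPES[self.shape]

    def set_busy(self, busy):
        # The computer, not the hand, drives this transformation.
        self.shape = "wait" if busy else "arrow"

mouse = Mouse()
cursor = Cursor()
same_object = id(mouse)
cursor.set_busy(True)             # the image changes ...
assert id(mouse) == same_object   # ... the grasped object does not
print(cursor.exemplifies())       # -> shows the computer's status
```

The design choice worth noticing is that `set_busy` belongs to the cursor alone: nothing in `Mouse` participates in the change, which mirrors the paper's claim that the transformation is out of the hand's control.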

Double Description
In the user interface, the image we see changes while the object we touch does not. Moreover, this transformation of the image is sometimes out of our control. Yet we naturally accept this phenomenon. In order to clarify it, I refer to Bateson’s double description. He quotes Shakespeare’s Macbeth and says:
This literary example will serve for all those cases of double description in which data from two or more different senses are combined. Macbeth “proves” that the dagger is only an hallucination by checking with his sense of touch, but even that is not enough. Perhaps his eyes are “worth all the rest.” It is only when “gouts of blood” appear on the hallucinated dagger that he can dismiss the whole matter: “There’s no such thing.” 6
We need a double description of seeing and touching in order to consider and prove what the user interface is. However, the user interface mainly focuses on seeing. For example, consider What You See Is What You Get (WYSIWYG). According to Wikipedia, the main attraction of WYSIWYG is the ability of the user to visualize what he or she is producing.7 Here, ‘to visualize’ is emphasized. One more example: Ben Shneiderman proposed a very important idea for the user interface, direct manipulation. For Shneiderman, the central idea of direct manipulation is the visibility of the object of interest.8 Here, only visibility is emphasized.

I want to look at direct manipulation from another point of view. According to George Lakoff and Mark Johnson, direct manipulation leads us not only to seeing but also to touching, because we see and touch an object in order to create something. They say that direct manipulation forms the concept of CAUSATION.9 Furthermore, Søren Pold connects the GUI with the concept of CAUSATION. He says that the WYSIWYG interface has many buttons, and that buttons are an essential part of the controls in the GUI,10 and he continues:
When pushing a button in a interface ─ that is, by movement of the mouse, directing the representation of one’s hand onto the representation of a button in the interface and activating a script by clicking or double-clicking ─ we somehow know we are in fact manipulating several layers of symbolic representation and,…… It is a software simulation of a function, and this simulation aims to hide its mediated character and acts as if the function were natural or mechanical in a straight cause-and-effect relation.11
According to Pold, pushing a button in the user interface makes a straight cause-and-effect relation on the electric display, although this cause and effect appears only by hiding the manipulation of several layers of symbolic representation.
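Pold's layering can be made concrete with a small sketch. The names (`Screen`, `Button`, `run_script`) and the layer boundaries are my own simplification, not any real toolkit's API: a click passes from raw coordinates through hit-testing against widgets to a bound script, yet the user sees only a single cause and effect.

```python
# A sketch of the layers hidden behind one 'button push':
# pixels -> widget hit-test -> bound script. Names are hypothetical.

def run_script(name):
    # Stands in for the symbolic action finally reached by the click.
    return f"ran {name}"

class Button:
    def __init__(self, rect, script):
        self.rect = rect          # (x, y, width, height) on the display
        self.script = script      # the symbol the click will finally reach

class Screen:
    def __init__(self, buttons):
        self.buttons = buttons

    def click(self, x, y):
        # Layer 1: raw coordinates from the pointing device.
        # Layer 2: hit-testing the coordinates against widget rectangles.
        # Layer 3: activating the script bound to the widget that was hit.
        for b in self.buttons:
            bx, by, w, h = b.rect
            if bx <= x < bx + w and by <= y < by + h:
                return run_script(b.script)
        return None               # a click on 'nothing' does nothing

screen = Screen([Button((10, 10, 80, 24), "save_document")])
print(screen.click(20, 20))       # -> ran save_document
```

To the user, `click(20, 20)` simply "presses the button"; the traversal of representations in between is exactly what the interface, in Pold's terms, works to hide.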

Let us take another view of the cursor and the mouse. The media artist Masaki Fujihata says that the computer does not have the question of materiality.12 This means that the computer has a different principle from ours, because we live in the material world. Fujihata considers that if repeated experiences are provided to us through interactive experience, "images are made into objects."13 Following the artist's intuition, we encounter a new phenomenon: images are made into objects.

What You See Is What You Touch?
However, this ‘object’ on the display is not a real object. According to Pold, this ‘object’ on the display shows us cause and effect; therefore, we feel as if we directly manipulate it with our own hands. On the other hand, Fujihata says that the ‘object’ does not have its own materiality. We see the ‘object’, of course, but can we touch it? Maybe yes; maybe no. I consider that this ‘object’ is something like the “switch” as Bateson defined it:
We do not notice that the concept “switch” is of quite a different order from the concepts “stone,” “table,” and the like. Closer examination shows that the switch, considered as a part of an electric circuit, does not exist when it is in the on position. From the point of view of the circuit, it is not different from the conducting wire which leads to it and the wire which leads away from it. It is merely “more conductor.” Conversely, but similarly when the switch is off, it does not exist from the point of view of the circuit. It is nothing, a gap between two conductors which, themselves exist only as conductors when the switch is on. In other words, the switch is not except at the moments of its change of setting, and the concept “switch” has thus a special relation to time. It is related to the notion “change” rather than to the notion “object.” 14
Like the concept “switch”, the ‘object’ is related to the notion “change” rather than to the notion “object”. The cursor is given a special relation to time by the mouse.

Why does this happen? Because the computer is a purely logical machine made by analogic humans in a cause-and-effect world: “the if ... then of logic in the syllogism is very different from the if ... then of cause and effect.” 15 Bateson asks: can logic simulate all sequences of cause and effect? His answer is maybe no.16 The reason is that “the if ... then of causality contains time, but the if ... then of logic is timeless. It follows that logic is an incomplete model of causality.”17 However, we want to remake the computer as a cause-and-effect machine via the double description of seeing and touching. By introducing a special relation to time with the mouse, the user interface makes new images by coupling cause and effect with logic on the electric display. We are making pseudo-cause-and-effect with logic in the user interface. Therefore, we touch the same plastic object even though the ‘object’ on the display is changing.

                                             fig.3
Returning to the cursor and the mouse: the cursor looks as if it shows cause and effect, but it stands for the logic in the computer, where there is no time. We, however, live in time, in a cause-and-effect world. Pold tells us that the user interface makes us believe there is a straight cause-and-effect relation, even though this cause and effect appears only by hiding the manipulation of several layers of symbolic representation. On the other hand, Fujihata says that if we see cause and effect behind the image, then we can touch it. Almost none of us understand what the logic in the computer is, but we use it through the display, the keyboard, the mouse, and other devices. The reason is that we believe the computer is not only logic, nor only cause and effect, but just ‘something’. Therefore, when we consider the user interface, we should not confuse the ‘if ... then of logic’ with the ‘if ... then of cause and effect’. We see the change of an image on the electric display; it shows us the logic in the computer, but we feel something like cause and effect, which is made from the computer’s pure logic. We touch the same object while the image changes: our grasp on the mouse introduces a special relation of time to the computer and makes pseudo-cause-and-effect in the computer (fig. 3). Therefore, seeing alone is not enough, and neither is touching alone. In order to clarify this ‘something’, the pseudo-cause-and-effect with logic, we have to consider the double description of seeing and touching in the user interface. In short, the computer has changed the relationship between our seeing and our touching. This means that the computer changes our notion and experience of touching an object: we touch not “object” but “change”.

References
1 Gregory Bateson, “Steps to an Ecology of Mind”, The University of Chicago Press, 2000, p. 291.
2 Masanori Mizuno, “The Formative Process of ‘Display Acts’ on the Establishment of GUI”, doctoral thesis, 2009, p. 3.
3 Oxford Advanced Learner’s Dictionary (electronic version), Oxford University Press, 2000.
4 Ibid.
5 Nelson Goodman, “Ways of Worldmaking”, Hackett Publishing Company, 1978, p. 32.
6 Gregory Bateson, “Mind and Nature”, Wildwood House, 1979, p. 73.
7 Wikipedia, http://en.wikipedia.org/wiki/WYSIWYG (accessed 9 April 2009).
8 Ben Shneiderman, ‘Direct Manipulation: A Step Beyond Programming Languages’ in Noah Wardrip-Fruin and Nick Montfort eds., “The New Media Reader”, MIT Press, 2003, p. 486.
9 George Lakoff & Mark Johnson, “Metaphors We Live By”, The University of Chicago Press, 2003, pp. 75-76.
10 Søren Pold, ‘Button’ in Matthew Fuller ed., “Software Studies: A Lexicon”, MIT Press, 2008, p. 31.
11 Ibid., p. 33.
12 Masaki Fujihata, “Masaki Fujihata: The Conquest of Imperfection ─ New Realities Created with Images and Media”, Center for Contemporary Graphic Art, 2007, p. 178.
13 Ibid., p. 178.
14 Bateson (1979), pp. 108-109.
15 Ibid., p. 58.
16 Ibid., p. 58.
17 Ibid., p. 59.

Original text: Proceedings of the 4th International Symposium on Multi-Sensory Design