Sunday, 30 November 2008

Time-based Imaging project part 2

A few weeks ago I saw the live-action animation "Tango" by Zbigniew Rybczynski. This Polish artist's video had a great impact on me. It shows a single small space - a room - in which 36 characters, at different stages of life, interact. It would not normally be possible to have this much action happening in one room, so Rybczynski came up with a unique idea. This is how he explains it: 'I had to draw and paint about 16.000 cell-mattes, and make several hundred thousand exposures on an optical printer. It took a full seven months, sixteen hours per day, to make the piece.' http://www.zbigvision.com


I was deeply inspired by Zbig Rybczynski's "Tango" and wanted to carry this inspiration into my project. He builds his piece from layer upon layer of animation, and the result is timeless. It reminded me of other, more contemporary artists. Semiconductor is a team of artists who experiment with digital animation and video editing. 'Semiconductor make moving image works which reveal our physical world in flux; cities in motion, shifting landscapes and systems in chaos. Since 1999 UK artists Ruth Jarman and Joe Gerhardt have worked with digital animation to transcend the constraints of time, scale and natural forces; they explore the world beyond human experience, questioning our very existence.' http://www.semiconductorfilms.com. I believe they use layer-on-layer animation in one of their pieces, 'Earth Moves'. Here are a few snapshots from their website:

I am certain that these inspirations will find a place in my project. My idea is to use a static camera to record a chosen location, which will serve as the background of my video. The foreground will be a static image edited frame by frame in Photoshop. I am thinking of a city location, with an image of a building in the foreground. As in the last picture above, I will edit the building 30 times for each second of footage, so even a ten-second shot means 300 edited frames. I am not planning to spend a full seven months editing as Rybczynski did :-) Thanks to the mathematical precision of Photoshop's effects, I can be much quicker.

Topic: Time-based Imaging project part 1

This time, about my time-based project. I have been working on my ideas for the project for a month now. In that time I focused on many aspects of the video-making and video-editing processes, but only now do I have an idea worth realizing on a big scale. I am posting my recent videos here with a short description of the methods I used. This is what I came up with in recent weeks:

As you can see, I put more importance on still pictures and editing in my project than on moving image. This theme is the crucial part of my project, as my final artefact is based on this technique of video-making. I was happy with the final effect, but there is something I would like to work on: the whole scene is seen from one angle and the camera is static. My next approach was to make the camera move around the object. In order to do that, I had to find a shooting technique that would give me control over the camera and its angles. The idea was to come up with a final product that uses the same technique as in the video above while remaining relevant to modern shooting techniques. My next artefact was just an attempt using a phone camera and a shabby installation.

Sunday, 23 November 2008

Details

Hi everyone! Harder, faster, better... Recently nothing else matters, and still... Time makes no exceptions, which is why I would like to inform you that I am closer every day to finishing my Experimental Media project. And so, this time I can give you a more detailed description of what I am building.

The idea is to have four URM37 V3.2 ultrasonic sensors attached to the fingers and a tiny camera attached to the palm, all sewn into a glove. Wearing this glove, the user is able to observe the world around him. Originally, I wanted the user to see the world through a visor worn on the head. This idea came from Char Davies's projects, where there is an image of a person wearing a visor connected to the rest of the equipment with cables. Obviously, that approach would be too expensive, so in this case a screen will do.

In my previous post I explained the ideological background of my project. As I said, the human hand is the medium of recognition in my project. The world and the materials within it change through human touch and sight. The user will be able to explore matter through its virtual representation: a person wearing this glove sees the world changing via touch.

My plan is to use Max/MSP Jitter and Arduino: two platforms speaking to each other by exchanging data. The URM37 sends the distance between the hand and an object to Max/MSP Jitter, and Jitter's visual effects manipulate the picture. At this stage I am working on the special effects in Jitter. I want as many effects as I can get, so that the user will be surprised by the vast diversity of virtual representations.
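To give an idea of the Arduino side of this link, here is a minimal sketch of how the glove could stream readings to Max/MSP. It assumes the URM37's PWM output mode, where, as far as I understand the manual, the sensor answers a trigger with a low pulse of roughly 50 microseconds per centimetre; the pin numbers and the one-sensor-at-a-time polling are my own placeholders, not the final wiring:

// Hypothetical sketch: poll four URM37 sensors in PWM mode and
// stream the readings over serial for Max/MSP's [serial] object.
// Pin numbers are placeholders, not the actual glove wiring.
const int NUM_SENSORS = 4;
const int trigPins[NUM_SENSORS] = {2, 4, 6, 8};  // COMP/TRIG pins
const int pwmPins[NUM_SENSORS]  = {3, 5, 7, 9};  // PWM output pins

void setup() {
  Serial.begin(9600);                 // must match the [serial] object in Max
  for (int i = 0; i < NUM_SENSORS; i++) {
    pinMode(trigPins[i], OUTPUT);
    pinMode(pwmPins[i], INPUT);
    digitalWrite(trigPins[i], HIGH);  // TRIG idles high
  }
}

void loop() {
  for (int i = 0; i < NUM_SENSORS; i++) {
    digitalWrite(trigPins[i], LOW);   // falling edge starts a measurement
    // My reading of the manual: the PWM pin answers with a low pulse,
    // about 50 microseconds per centimetre of distance.
    unsigned long pulse = pulseIn(pwmPins[i], LOW, 60000UL);
    digitalWrite(trigPins[i], HIGH);
    unsigned int cm = (pulse > 0) ? pulse / 50 : 0;  // 0 = no reading
    Serial.print(cm);
    Serial.print(i < NUM_SENSORS - 1 ? " " : "\n");  // lines like "12 40 7 93"
  }
  delay(50);                          // let the sensors settle between rounds
}

On the Max side, the plan would be to read these lines with the [serial] object and unpack the four numbers so that each finger can drive a different Jitter effect.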

Now, a short description of how I am dealing with the project:

First of all, I bought two URM37 V3.2 sensors, paying approx. 10 pounds for each. It was hard to find the 'Parallax' ultrasonic sensor on eBay, but I thought that buying the URM37 was the better option anyway because of its additional ability to sense temperature changes. Here is the link to its specifications: www.yerobot.com/download/mannual/URM3.2%20Mannual.pdf

Everything looked good until I had to connect the URM to the Arduino board. With the Parallax it is a piece of cake because it has only three pins, so you cannot go wrong plugging it in. Here I had to do some research on the URM, as its board has nine pins. I found the proper configuration at: http://www.yerobot.com/forum/viewtopic.php?f=5&t=7&p=10&sid=72f4c2fbb84bf3
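For my own reference, this is how I currently understand the sensor's TTL serial option from the manual and the forum thread above. The wiring and the command bytes below are my reading of the documentation, so treat this as a bench-test sketch rather than a verified recipe:

#include <SoftwareSerial.h>

// Assumed wiring for a quick serial-mode test: URM TXD -> Arduino pin 10,
// URM RXD -> Arduino pin 11, plus VCC and GND; the other pins stay free.
SoftwareSerial urm(10, 11);  // RX, TX

void setup() {
  Serial.begin(9600);   // debug output to the PC
  urm.begin(9600);      // the URM37 talks TTL serial at 9600 baud
}

void loop() {
  // As I read the manual, a command frame is [cmd, dataH, dataL, checksum];
  // 0x22 asks for a distance reading, so the checksum is also 0x22.
  byte cmd[4] = {0x22, 0x00, 0x00, 0x22};
  urm.write(cmd, 4);
  delay(60);  // give the sensor time to answer

  if (urm.available() >= 4) {
    byte reply[4];
    urm.readBytes(reply, 4);
    if (reply[0] == 0x22) {
      unsigned int cm = reply[1] * 256 + reply[2];  // 0xFFFF = out of range
      Serial.println(cm);
    }
  }
  delay(100);
}

If I have read the manual correctly, the same frame with 0x11 in place of 0x22 should return the temperature reading that made me choose this sensor in the first place.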

The next part is the Max/MSP Jitter patch that recognizes the different distance values. I downloaded this patch from Lecture 6 of Experimental Media, uploaded to the TVU Blackboard thanks to Richard Colson: