Friday, 9 January 2009

Experimental Media - The Neovision

Lately I have been mostly focused on the actual design of 'Neovision', not just its functionality. The idea is to mount three devices on a glove: an ultrasonic distance sensor, a wireless camera and an Arduino board. This is how the glove looks:

Now I am making it all work together. At the moment the Max/MSP patch receives the distance from the ultrasonic sensor, and that triggers the camera to turn on and off. I should be ready with the patch by the next post. Or at least I hope so.
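To give an idea of the logic on the board, here is a minimal Arduino sketch. It is only an illustration, not my final firmware: the HC-SR04-style sensor, the pin numbers and the transistor switching the camera are all assumptions. The board streams the measured distance over serial, which the Max/MSP patch can read with a [serial] object.

```cpp
// Minimal sketch: measure distance with an HC-SR04-style sensor and stream
// it over serial for Max/MSP. Pins and threshold are assumed values.
const int trigPin = 9;        // sensor trigger (assumption)
const int echoPin = 10;       // sensor echo (assumption)
const int camPin  = 7;        // transistor switching the camera (assumption)
const long thresholdCm = 30;  // example trigger distance

void setup() {
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  pinMode(camPin, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  // Fire a 10 microsecond trigger pulse, then time the echo.
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);

  long duration = pulseIn(echoPin, HIGH, 30000);  // give up after 30 ms
  long cm = duration / 58;                        // round-trip time to cm

  Serial.println(cm);  // Max/MSP reads this line via the [serial] object
  // Optional local toggle, in case the camera is switched on the glove itself:
  digitalWrite(camPin, (cm > 0 && cm < thresholdCm) ? HIGH : LOW);
  delay(100);
}
```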

Thursday, 8 January 2009

Time-based imaging - the EyesWeb patch



This is the patch I am working on. It combines three sub-patches into one. The first part does pitch recognition: the sound comes from a microphone and is forwarded to the rescaler, and then to the FreeFrame plugin. The second part combines the three boxes used in the Canny Corner Detection patch. The third and last chunk consists of three RGB channel extractors, each block assigned to a separate colour. These are followed by three threshold-with-interval blocks, and each colour is delayed just before the output (display) by Queue blocks.
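For anyone who wants the gist of that third chain without EyesWeb, here is a rough C++/OpenCV equivalent I put together. The threshold interval and the per-channel delay lengths are made-up example values, not the ones in my patch.

```cpp
// Sketch of the RGB-extract / interval-threshold / per-channel-delay chain.
#include <opencv2/opencv.hpp>
#include <deque>
#include <vector>

int main() {
    cv::VideoCapture cap(0);              // default camera (assumption)
    if (!cap.isOpened()) return 1;

    const size_t delays[3] = {5, 15, 30}; // example delays, in frames
    std::deque<cv::Mat> queues[3];        // one FIFO per channel, like the Queue blocks

    cv::Mat frame;
    while (cap.read(frame)) {
        std::vector<cv::Mat> channels;
        cv::split(frame, channels);       // OpenCV stores frames as B, G, R

        cv::Mat out[3];
        for (int c = 0; c < 3; ++c) {
            // Threshold with an interval: keep values inside [80, 200].
            cv::Mat lo, hi;
            cv::threshold(channels[c], lo, 80, 255, cv::THRESH_BINARY);
            cv::threshold(channels[c], hi, 200, 255, cv::THRESH_BINARY_INV);
            cv::bitwise_and(lo, hi, out[c]);

            // Delay the channel by pushing it through the FIFO.
            queues[c].push_back(out[c].clone());
            if (queues[c].size() > delays[c]) {
                out[c] = queues[c].front();
                queues[c].pop_front();
            }
        }

        cv::Mat merged;
        cv::merge(std::vector<cv::Mat>{out[0], out[1], out[2]}, merged);
        cv::imshow("delayed channels", merged);
        if (cv::waitKey(1) == 27) break;  // Esc quits
    }
    return 0;
}
```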

Saturday, 3 January 2009

Time-based imaging

Hi, Christmas time was a busy time for me, and I came up with ideas that wrap up my work for this semester. I have been working on the idea of motion that depends on sound. I tried to match the volume of sound with a range of equalizers I built from lined-up bottles or 'fruit-tella' candies. My first attempts were not satisfying because of the amount of work the process took and the tackiness of the final artefact. That is why I was looking for an easier way to achieve the right quality in the final piece.

First of all, I purchased some decent equipment - a Mini Wireless Bird Box Camera. Thanks to its capabilities and the fact that it is wireless, it gives me the quality and flexibility I was looking for.

Second of all, I came up with an idea for a more realistic equalizer. I am building a patch in EyesWeb that will read the volume of sound coming from the camera's microphone and apply its values to the picture. The camera will take a photo every 5 seconds over a 6-hour session. Depending on the volume of sound, the images taken in real time will be layered with images taken in the past. I also have to add some kind of long-exposure effect so the pictures are not too blurry. I am about to film the scenes in a club so that the camera captures human movement - people dancing on the dancefloor. I already have permission to do that and am finalizing the EyesWeb patch so it is ready for next week.
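To make the layering idea concrete, here is a loose C++/OpenCV sketch of the same logic - it is not the EyesWeb patch itself. A running average stands in for the long-exposure effect, and the sound level (stubbed out here, since capturing the microphone is beyond the sketch) controls how strongly an older frame is mixed into the live one. The history length and blending rate are arbitrary example values.

```cpp
// Sketch: volume-driven layering of past frames over a long-exposure average.
#include <opencv2/opencv.hpp>
#include <deque>

// Placeholder: would return the current microphone level in [0, 1].
double getVolume() { return 0.5; }

int main() {
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return 1;

    std::deque<cv::Mat> history;       // frames "taken in the past"
    const size_t maxHistory = 72;      // e.g. six minutes at one frame per 5 s
    cv::Mat accum;                     // running average = cheap long exposure

    cv::Mat frame;
    while (cap.read(frame)) {
        cv::Mat f32;
        frame.convertTo(f32, CV_32FC3, 1.0 / 255.0);

        // Long-exposure stand-in: exponential running average of the feed.
        if (accum.empty()) accum = f32.clone();
        cv::accumulateWeighted(f32, accum, 0.05);

        // Layering: louder sound mixes in more of the oldest stored frame.
        double vol = getVolume();
        cv::Mat out = accum.clone();
        if (!history.empty())
            cv::addWeighted(accum, 1.0 - vol, history.front(), vol, 0.0, out);

        history.push_back(f32.clone());
        if (history.size() > maxHistory) history.pop_front();

        cv::imshow("layered", out);
        if (cv::waitKey(1) == 27) break;
    }
    return 0;
}
```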

Wednesday, 10 December 2008

Motion tracking and object recognition

This week I am researching the possibilities of using motion tracking or object recognition in my project. In short, I want the camera to recognize people and their movement so that their postures are not affected by the special effects - only the background is going to morph. This technique is possible in Max/MSP Jitter thanks to the 'cv.jit' library. It is all fine except for one thing: I do not possess the full version of Jitter, which is needed for the library to work. Still, no problem - I can always work in the TVU facilities, where the full version of Jitter is installed on Macs. In the meantime, I am researching techniques similar to cv.jit, or an equivalent.
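One equivalent outside Jitter is plain background subtraction. The sketch below, in C++ with OpenCV rather than cv.jit, shows the general idea: a background model flags the moving people, and an effect (a simple blur here, standing in for whatever morph I end up using) is applied everywhere except where the mask says a person is.

```cpp
// Sketch: morph the background only, leaving moving people untouched.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return 1;

    auto subtractor = cv::createBackgroundSubtractorMOG2();

    cv::Mat frame, fgMask;
    while (cap.read(frame)) {
        subtractor->apply(frame, fgMask);  // white where motion/people are

        cv::Mat morphed;
        cv::GaussianBlur(frame, morphed, cv::Size(21, 21), 0);  // stand-in effect

        frame.copyTo(morphed, fgMask);     // restore original pixels on people
        cv::imshow("background morph only", morphed);
        if (cv::waitKey(1) == 27) break;
    }
    return 0;
}
```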

This is what I have found browsing the web. It is called CamSpace, and it is a great piece of software for playing games without controllers such as joysticks, mice, keyboards or pads. All you need to control anything on your PC is a camera. The software recognizes objects seen by the camera through colour recognition, so anything in front of the camera can become a controller. This is awesome! This invention can be expanded and used in many different ways, not just for playing games. But enough reading - see it for yourself on their website. It is also free to install on any PC.
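For a feel of how colour-based tracking like this works under the hood, here is a tiny sketch of my own in C++/OpenCV (nothing to do with CamSpace's actual code): it isolates a hue range and finds the centroid of the matching pixels, which a game could then treat as a pointer. The hue bounds and the pixel-count cut-off are assumptions.

```cpp
// Sketch: track a coloured object by hue and mark its centroid.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return 1;

    cv::Mat frame, hsv, mask;
    while (cap.read(frame)) {
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
        // Track a red-ish object: hue 0-10, reasonably saturated and bright.
        cv::inRange(hsv, cv::Scalar(0, 120, 80), cv::Scalar(10, 255, 255), mask);

        cv::Moments m = cv::moments(mask, true);
        if (m.m00 > 500) {  // enough matching pixels to trust the position
            cv::Point centre(int(m.m10 / m.m00), int(m.m01 / m.m00));
            cv::circle(frame, centre, 10, cv::Scalar(0, 255, 0), 2);
        }
        cv::imshow("colour controller", frame);
        if (cv::waitKey(1) == 27) break;
    }
    return 0;
}
```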


I found more on the topic of motion tracking. Here is a blog on which the creator explains and demonstrates how to play 'ping pong' using live video tracking and a microphone.

Monday, 8 December 2008

Time-based imaging project part 3

Here are some videos and tutorials about high-quality timelapse effects. A remarkably cheap setup allows you to capture high-quality images and then create a movie from the series. The tutorial and videos are described on
And here is an amazing time-lapse video I spotted on YouTube.
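The assembly step itself - turning a folder of stills into a movie - is simple enough to sketch in C++ with OpenCV. The filename pattern, frame rate and codec below are just assumptions for illustration.

```cpp
// Sketch: read numbered stills and write them out as a timelapse movie.
#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    cv::VideoWriter writer;
    char name[64];

    for (int i = 0; ; ++i) {
        std::snprintf(name, sizeof(name), "shot_%04d.jpg", i);  // assumed naming
        cv::Mat frame = cv::imread(name);
        if (frame.empty()) break;  // stop at the first missing still

        if (!writer.isOpened())
            writer.open("timelapse.avi",
                        cv::VideoWriter::fourcc('M', 'J', 'P', 'G'),
                        25.0, frame.size());  // 25 fps, Motion-JPEG (assumed)
        writer.write(frame);
    }
    return 0;
}
```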

Thursday, 4 December 2008

The design of the glove

While working on the effects I want to apply in my project, I am also thinking more and more about the design of the glove. A good example of what I am aiming for is a scene from the movie 'Minority Report'. As I am a fan of Philip K. Dick film adaptations, I like the design of the gloves in that movie.

What interests me more is the idea of lights on top of the fingers. Now I am wondering whether it is possible to have a pair of gloves: on one hand the camera with the ultrasonic sensor, and on the other a big multi-coloured LED in the palm. When the two gloves/hands are close to each other, the one with the LED on could show the object in a different light and thereby change the effect seen on the screen. I came up with this idea partly thanks to the 'jit.chromakey' object in Max/MSP Jitter. Chromakeying is the method of masking out one colour, within a tolerance, and replacing it with other data/footage. The colour can be altered, so I am planning to take a closer look at that.
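To spell out what that step does, here is a bare-bones chromakey in C++ with OpenCV. It only approximates the behaviour of jit.chromakey - it is not Jitter code - and the key colour, tolerance and file names are example assumptions.

```cpp
// Sketch: mask pixels near a key colour (within a tolerance) and replace
// them with pixels from a second image.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat fg = cv::imread("glove.jpg");       // foreground with key colour
    cv::Mat bg = cv::imread("background.jpg");  // replacement footage/frame
    if (fg.empty() || bg.empty()) return 1;
    cv::resize(bg, bg, fg.size());

    // Key out a green-ish colour with a +/- tolerance per channel (BGR).
    cv::Scalar key(0, 255, 0);
    int tol = 60;
    cv::Mat mask;
    cv::inRange(fg,
                cv::Scalar(key[0] - tol, key[1] - tol, key[2] - tol),
                cv::Scalar(key[0] + tol, key[1] + tol, key[2] + tol),
                mask);

    cv::Mat out = fg.clone();
    bg.copyTo(out, mask);  // keyed pixels take the background's data
    cv::imwrite("keyed.jpg", out);
    return 0;
}
```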

Sunday, 30 November 2008

Time-based imaging project part 2

A few weeks ago I saw the live-action animation "Tango" by Zbigniew Rybczynski. This Polish artist's video had a great impact on me. It shows a single small space - a room - where 36 characters, at different stages of life, interact. It would not have been possible to stage this much action in one room, so Rybczynski came up with a unique idea. This is how he explains it: 'I had to draw and paint about 16.000 cell-mattes, and make several hundred thousand exposures on an optical printer. It took a full seven months, sixteen hours per day, to make the piece.' http://www.zbigvision.com


I was deeply inspired by Zbig Rybczynski's "Tango" and wanted to include this inspiration in my project. The key fact is that he uses layers upon layers of animation; his piece is timeless. It reminded me of other, more contemporary artists. Semiconductor is a team of artists who experiment with digital animation and video editing. 'Semiconductor make moving image works which reveal our physical world in flux; cities in motion, shifting landscapes and systems in chaos. Since 1999 UK artists Ruth Jarman and Joe Gerhardt have worked with digital animation to transcend the constraints of time, scale and natural forces; they explore the world beyond human experience, questioning our very existence.' http://www.semiconductorfilms.com. I believe they use layer-on-layer animation in one of their pieces, 'Earth Moves'. Here are a few snapshots from their website:

I am certain that these inspirations will find a place in my project. My idea is to use a static camera to record a chosen location and use it as the background of my video. The foreground will be a static image edited frame by frame in Photoshop. I am thinking of a city location, with an image of a building in the foreground. As in the last picture above, I will edit the building 30 times for each second of footage. I am not planning to spend a full seven months editing as Rybczynski did :-) Thanks to the mathematical precision of Photoshop effects, I can edit much faster.
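If the Photoshop frames are exported as numbered PNGs with an alpha channel, the final assembly could even be scripted rather than done by hand. Here is a rough C++/OpenCV sketch of that step; the file names, frame rate and hard-edged compositing are all assumptions.

```cpp
// Sketch: overlay edited foreground frame N onto background frame N.
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main() {
    cv::VideoCapture bg("background.avi");  // the static-camera recording
    if (!bg.isOpened()) return 1;

    cv::Mat frame;
    char name[64];
    for (int i = 0; bg.read(frame); ++i) {
        std::snprintf(name, sizeof(name), "building_%04d.png", i);  // assumed naming
        cv::Mat fg = cv::imread(name, cv::IMREAD_UNCHANGED);        // keep alpha
        // Assumes each PNG matches the video frame size.
        if (fg.empty() || fg.channels() != 4 || fg.size() != frame.size()) continue;

        // Split out the alpha channel and use it as a paste mask.
        std::vector<cv::Mat> ch;
        cv::split(fg, ch);
        cv::Mat rgb;
        cv::merge(std::vector<cv::Mat>{ch[0], ch[1], ch[2]}, rgb);
        rgb.copyTo(frame, ch[3]);            // hard-edged composite

        cv::imshow("composite", frame);
        if (cv::waitKey(40) == 27) break;    // roughly 25 fps playback
    }
    return 0;
}
```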