Friday, 9 January 2009

Experimental Media - The Neovision

Lately I have been mostly focused on the actual design of 'Neovision', not just its functionality. The idea is to mount three devices on a glove: an ultrasonic distance sensor, a wireless camera and an Arduino board. This is how the glove looks: 

Now I am making it all work. At the moment the Max/MSP patch receives the distance from the ultrasonic sensor, and that triggers the camera to turn on and off. In the next post I should be ready with the patch. Or at least I hope so.
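The trigger logic above can be sketched in a few lines of Python. This is not the actual Max/MSP patch, just a hypothetical illustration of the idea: the thresholds and the hysteresis band are my own assumptions, added so a hand hovering near the cutoff distance does not make the camera flicker on and off.

```python
# Illustrative sketch of the distance-triggered camera switch.
# Thresholds are assumed values, not taken from the real patch.

CAMERA_ON_CM = 50   # assumed: switch camera on when something is within 50 cm
CAMERA_OFF_CM = 70  # assumed: switch off only past 70 cm (hysteresis band)

def camera_state(distance_cm: float, currently_on: bool) -> bool:
    """Decide whether the camera should be on for this sensor reading."""
    if distance_cm <= CAMERA_ON_CM:
        return True
    if distance_cm >= CAMERA_OFF_CM:
        return False
    return currently_on  # inside the dead band, keep the previous state
```

Feeding each new reading and the previous state into `camera_state` gives a stable on/off signal even when the measured distance jitters around a single threshold.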

Thursday, 8 January 2009

Time-based imaging - the Eyesweb patch



This is the patch I am working on. It combines three patches into one. The first part does pitch recognition: the sound comes from a microphone and is forwarded to a rescaler and then to a FreeFrame plugin. The second part combines the three blocks used in the Canny Corner Detection patch. The third and last part consists of three RGB channel extractors, each assigned to a separate colour, followed by three threshold blocks with intervals. Each colour is delayed just at the exit (display) by Queue blocks.
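The third part of the patch - extract each colour channel, threshold it against an interval, and delay it by a queue before display - can be sketched in Python. This is a hypothetical illustration, not EyesWeb code: single per-pixel values stand in for whole channel images, and the interval bounds and queue lengths are my own assumed numbers.

```python
from collections import deque

# Illustrative sketch of the RGB part of the patch: per-channel interval
# thresholds, then a per-channel delay queue before "display".
# All thresholds and delay lengths below are assumptions.

THRESHOLDS = {"r": (50, 200), "g": (80, 220), "b": (30, 180)}
DELAYS = {"r": 0, "g": 2, "b": 4}  # frames of delay per channel

# One queue per channel, pre-filled with black, like the Queue blocks.
queues = {c: deque([0] * DELAYS[c], maxlen=DELAYS[c] + 1) for c in "rgb"}

def process_frame(r, g, b):
    """Threshold each channel value, push it through its delay queue,
    and return the delayed values that would reach the display."""
    out = {}
    for name, value in (("r", r), ("g", g), ("b", b)):
        lo, hi = THRESHOLDS[name]
        masked = value if lo <= value <= hi else 0  # interval threshold
        q = queues[name]
        q.append(masked)
        out[name] = q[0]  # oldest entry in the queue = delayed output
    return out["r"], out["g"], out["b"]
```

Because each channel has a different queue length, the red, green and blue versions of the same moment arrive at the display staggered in time, which is the visual effect the Queue blocks produce.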

Saturday, 3 January 2009

Time-based imaging

Hi, Christmas was a busy time for me, and I came up with ideas that wrap up my work this semester. I have been working on the idea of motion that depends on sound. I tried to match the volume of sound to a range of equalizers I built from lined-up bottles or 'fruit-tella' candies. My first attempts were not satisfying because of the amount of work the process took and the tackiness of the final artefact. That is why I looked for an easier way to achieve the right quality in the final artefact.

   First of all, I purchased a decent piece of equipment - a Mini Wireless Bird Box Camera. Thanks to its capabilities and the fact that it's wireless, it gives me the quality and flexibility I was looking for.

Second of all, I came up with the idea of a more realistic equalizer. I am building a patch in Eyesweb that will read the volume of sound coming from my camera's microphone and apply its values to the picture. The camera will take a photo every 5 seconds over a 6-hour session. Depending on the volume of sound, the images taken in real time will be layered with images taken in the past. I also have to add some kind of long-exposure effect so the pictures are not too blurry. I am going to film the scenes in a club so that the camera captures human movement - people dancing on the dancefloor. I already have permission to do that, and I am finalizing the Eyesweb patch so it is ready for next week.
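The layering rule described above can be sketched as a simple blend: the louder the sound, the more of a past frame is mixed into the live one. This is only a minimal illustration of the idea, not the EyesWeb patch itself; the pixel lists and the linear volume-to-weight mapping are assumptions of mine.

```python
# Illustrative sketch of volume-driven layering: blend a live frame with a
# past frame, where the sound volume (assumed normalized to [0, 1]) sets
# the mix weight. Frames are flat lists of pixel values for simplicity.

def layer_pixels(live, past, volume):
    """Blend live and past pixel values.
    At volume 0 the output is the live image; at volume 1, the past one."""
    w = max(0.0, min(1.0, volume))  # clamp volume to [0, 1]
    return [round((1 - w) * lv + w * pv) for lv, pv in zip(live, past)]
```

With a photo every 5 seconds, silence would show the dancefloor as it is now, while loud passages would pull older frames back into view, which is the sound-dependent motion effect described above.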