Camera Interaction

As discussed in my previous post, for the next stage of my project I decided to look into various methods of interacting with the camera using Processing, as this will be a major part of the project's requirements. An obvious first step was to look at Processing's own camera functions, which can be accessed with the following lines:

import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  // List the available cameras and use the first one
  String[] cameras = Capture.list();
  cam = new Capture(this, cameras[0]);
  cam.start();
}

void draw() {
  // Read a new frame when one is available, then draw it
  if (cam.available() == true) {
    cam.read();
  }
  image(cam, 0, 0);
}

After importing Processing’s video library, these lines read the input from a camera and use the image() function to display the frames inside draw(). The main way to create interactivity with the camera is to extract information from the images being displayed and use that data to manipulate other elements. To test how to get information from this function I created some Processing sketches.

The above sketch grabs the colour value from the pixel where the mouse is located and assigns that colour to the rectangle in the bottom right.

In order to create this sketch I used techniques similar to those from my earlier testing of Processing’s PImage functions, where the colour of a pixel in an image can be accessed with image.get(pixelX, pixelY):

  // Sample the colour under the mouse and use it to fill a rectangle
  color c = cam.get(mouseX, mouseY);
  fill(c);
  rect(width/2, height/2, width/2+200, height/2+300);

In the actual sketch I used the lines above, where the image comes from the camera and the pixel coordinates are replaced with the mouse’s X and Y location. Learning this was very useful, as it can be applied to many different applications to create a range of effects.
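Under the hood, Processing stores an image’s pixels in a flat one-dimensional array, row by row, so get(x, y) corresponds to indexing that array at y * width + x. A minimal sketch of that mapping in plain Java (the tiny “image” and its values are made up purely for illustration):

```java
public class PixelIndex {
    // Pixels are stored row by row, so the pixel at (x, y)
    // lives at index y * width + x in the flat array
    static int getPixel(int[] pixels, int width, int x, int y) {
        return pixels[y * width + x];
    }

    public static void main(String[] args) {
        // A tiny 3x2 "image"; the values stand in for packed colours
        int width = 3;
        int[] pixels = {1, 2, 3,
                        4, 5, 6};
        System.out.println(getPixel(pixels, width, 2, 1)); // 6
    }
}
```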

Sketch by Daniel Shiffman. The sketch uses the camera to detect motion and fills in areas of motion with black.

One of the main ways pixel colour is used is in motion detection. In the above sketch by Daniel Shiffman, he logs the colour of each pixel every frame and compares it to the previous frame. If the pixel colour is different (i.e. something has moved in front of the camera) then the pixel is coloured black when presented on screen, whilst all the “static” pixels are coloured white. I found the method he uses for this pretty interesting. Whilst looping through all the pixels in a frame, Shiffman treats each pixel’s RGB values as 3D coordinates, using the red, green and blue components as the x, y and z values. Then, using the dist() function, which is usually used for calculating the distance between two points, he compares the corresponding pixels from the previous and current frames. This gives a float value which, if it is higher than a certain threshold, marks a “motion” pixel, which is assigned a black colour.
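The comparison step can be sketched in plain Java: treat the two colours as points in 3D space and check their Euclidean distance against a threshold (the threshold value here is an assumption for illustration, not taken from Shiffman’s sketch):

```java
public class MotionPixel {
    // Treat (r, g, b) as a point in 3D space, as in Shiffman's sketch,
    // and measure the Euclidean distance between the two colours
    static float colourDist(float r1, float g1, float b1,
                            float r2, float g2, float b2) {
        float dr = r1 - r2, dg = g1 - g2, db = b1 - b2;
        return (float) Math.sqrt(dr * dr + dg * dg + db * db);
    }

    public static void main(String[] args) {
        float threshold = 50;  // assumed value, not from the original sketch

        // Same colour in both frames: below threshold, a "static" pixel
        System.out.println(colourDist(10, 10, 10, 10, 10, 10) > threshold); // false

        // Black to white between frames: well above threshold, a "motion" pixel
        System.out.println(colourDist(0, 0, 0, 255, 255, 255) > threshold); // true
    }
}
```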

The main advantage of using a technique like this in my sketch would be that it gives a very detailed map of motion, as illustrated in the above image. However, as most of my ideas are based around detecting specifically human subjects, it might not be very suitable: it would be hard to tell whether the motion was coming from a human or another source, so looking into other methods of camera interaction would be preferable. Also, in a busy area it would detect a lot of motion pixels, sometimes all across the screen, which would make it hard for objects that are meant to be influenced by these pixels to move in one direction.

Sketch by Daniel Shiffman. The sketch finds the average location of the motion pixels and displays an ellipse there.

The above sketch would be far more useful to my project as it only gives one “output” of data. However, during testing it was quite hard to control accurately, and the sketch shares some of the previous one’s problems: a busy environment produces a lot of conflicting data, which makes the ellipse jump around a lot.
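The averaging idea behind that single output can be sketched in plain Java: sum the coordinates of every motion pixel and divide by their count. The boolean motion mask here is made up for illustration; Shiffman’s sketch derives it from the frame difference as described above.

```java
public class AverageMotion {
    // Average the (x, y) locations of all "motion" pixels in a mask;
    // returns {avgX, avgY}, or {-1, -1} when nothing moved
    static double[] averageLocation(boolean[] motion, int width) {
        double sumX = 0, sumY = 0;
        int count = 0;
        for (int i = 0; i < motion.length; i++) {
            if (motion[i]) {
                sumX += i % width;  // column of this pixel
                sumY += i / width;  // row of this pixel
                count++;
            }
        }
        if (count == 0) return new double[]{-1, -1};
        return new double[]{sumX / count, sumY / count};
    }

    public static void main(String[] args) {
        // 4x2 mask with motion at (1,0) and (3,0): average is (2.0, 0.0)
        boolean[] mask = {false, true, false, true,
                          false, false, false, false};
        double[] avg = averageLocation(mask, 4);
        System.out.println(avg[0] + ", " + avg[1]); // 2.0, 0.0
    }
}
```

This also shows why a busy scene makes the ellipse jump: motion pixels scattered across the frame pull the single average point in different directions each frame.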

I will create some prototypes to see if this sketch could be modified to use with my gravity sketches as this could be used as a base for my main Processing sketch if successful.

Shiffman, D. 2008. Learning Processing: CHAPTER 16: EXAMPLE 16-13: SIMPLE MOTION DETECTION [online]. Available from: http://www.learningprocessing.com/examples/chapter-16/example-16-13/ [Accessed 07.12.2014]

Shiffman, D. 2008. Learning Processing: CHAPTER 16: EXERCISE 16-7: AVERAGE LOCATION OF MOTION [online]. Available from: http://www.learningprocessing.com/examples/chapter-16/example-16-13/ [Accessed 07.12.2014]