Final Prototype | Processing

I have now finished my interactive piece. It runs at a low frame rate and is filtered to greyscale to mimic the look of a CCTV camera. As users walk past, their faces are detected (up to a maximum of four at once) and blurred out. Below is the code for my program:

//import libraries
import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;

//PImages to store captured faces
PImage face0 = createImage(0, 0, RGB);
PImage face1 = createImage(0, 0, RGB);
PImage face2 = createImage(0, 0, RGB);
PImage face3 = createImage(0, 0, RGB);


void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480, 20);
  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  frameRate(3);
  video.start();
  smooth();
}

void draw() {
  //need to scale up to make the image bigger at some point
  scale(1);

  //display video feed
  opencv.loadImage(video);
  image(video, 0, 0);
  filter(GRAY);

  //array to store the faces
  Rectangle[] faces = opencv.detect();
  println(faces.length);

  if (faces.length >= 1) {
    face0 = get(faces[0].x, faces[0].y, faces[0].width, faces[0].height);
    face0.filter(BLUR, 6);
    image(face0, faces[0].x, faces[0].y);
    if (faces.length >= 2) {
      face1 = get(faces[1].x, faces[1].y, faces[1].width, faces[1].height);
      face1.filter(BLUR, 6);
      image(face1, faces[1].x, faces[1].y);
      if (faces.length >= 3) {
        face2 = get(faces[2].x, faces[2].y, faces[2].width, faces[2].height);
        face2.filter(BLUR, 6);
        image(face2, faces[2].x, faces[2].y);
        if (faces.length >= 4) {
          face3 = get(faces[3].x, faces[3].y, faces[3].width, faces[3].height);
          face3.filter(BLUR, 6);
          image(face3, faces[3].x, faces[3].y);
        }
      }
    }
  }
} // close draw

void captureEvent(Capture c) {
  c.read();
}
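As a side note, the nested if statements could be collapsed into a single loop over the faces array, capped at four. Below is a rough sketch of that control flow in plain Java (using `java.awt.Rectangle`, the same type `opencv.detect()` returns); the `FaceBlurLoop` class name and `facesToBlur` helper are illustrative only, and inside an actual Processing sketch the loop body would be the same get/filter/image calls as above.

```java
import java.awt.Rectangle;

public class FaceBlurLoop {
  static final int MAX_FACES = 4;

  // Return the subset of detected faces that would actually be blurred,
  // capped at MAX_FACES - the same bound the nested ifs enforce.
  static Rectangle[] facesToBlur(Rectangle[] faces) {
    int n = Math.min(faces.length, MAX_FACES);
    Rectangle[] out = new Rectangle[n];
    for (int i = 0; i < n; i++) {
      out[i] = faces[i];
      // In the sketch, this is where each face would be processed:
      // PImage f = get(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
      // f.filter(BLUR, 6);
      // image(f, faces[i].x, faces[i].y);
    }
    return out;
  }

  public static void main(String[] args) {
    Rectangle[] detected = {
      new Rectangle(10, 10, 50, 50),
      new Rectangle(100, 20, 40, 40),
      new Rectangle(200, 30, 60, 60),
      new Rectangle(300, 40, 45, 45),
      new Rectangle(400, 50, 55, 55) // fifth face, ignored by the cap
    };
    System.out.println(facesToBlur(detected).length); // prints 4
  }
}
```

One advantage of this shape is that the four separate PImage variables (face0 to face3) are no longer needed; each face is handled inside the loop iteration.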

The problem I had last time, where the blur filter covered the whole screen instead of just the rectangle around the detected face, has been resolved. The issue with my original code was that nothing commanded it to redraw the recognised face, so the filter was applied to the entire canvas. In the new code, the get() call inside each if statement copies the pixels within the detected face rectangle into a PImage, the blur is applied to that copy, and image() draws the blurred copy back into the same space. My previous code could apply a masked image from the PImage created in setup over the facial detection area, but was not able to apply the blur. After a point in the right direction from another student, I understood how this newly discovered approach would help me achieve my idea, and I now have a clear understanding of how this kind of function works within Processing.
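The copy-process-paste idea described above can be mimicked without Processing at all, treating the frame as a plain 2D array of pixel values. The sketch below is an illustration only (the class and helper names are made up): copyRegion stands in for get(), blur is a crude stand-in for filter(BLUR, ...), and pasteRegion stands in for drawing the copy back with image(); note that pixels outside the face rectangle are left untouched, which is exactly what fixed the whole-screen blur problem.

```java
public class RegionBlurDemo {
  // Copy the w-by-h region at (x, y) out of the frame - analogous to get(x, y, w, h).
  static int[][] copyRegion(int[][] frame, int x, int y, int w, int h) {
    int[][] region = new int[h][w];
    for (int r = 0; r < h; r++)
      for (int c = 0; c < w; c++)
        region[r][c] = frame[y + r][x + c];
    return region;
  }

  // Paste the processed copy back at the same coordinates - analogous to image(f, x, y).
  static void pasteRegion(int[][] frame, int[][] region, int x, int y) {
    for (int r = 0; r < region.length; r++)
      for (int c = 0; c < region[0].length; c++)
        frame[y + r][x + c] = region[r][c];
  }

  // Crude stand-in for filter(BLUR, 6): flatten the copy to its average value.
  static void blur(int[][] region) {
    long sum = 0;
    int n = 0;
    for (int[] row : region) for (int v : row) { sum += v; n++; }
    int avg = (int) (sum / n);
    for (int r = 0; r < region.length; r++)
      for (int c = 0; c < region[0].length; c++)
        region[r][c] = avg;
  }

  public static void main(String[] args) {
    int[][] frame = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9} };
    int[][] face = copyRegion(frame, 1, 1, 2, 2); // the 2x2 "face": {{5,6},{8,9}}
    blur(face);                                   // average of 5,6,8,9 is 7
    pasteRegion(frame, face, 1, 1);
    System.out.println(frame[0][0] + " " + frame[1][1]); // prints "1 7"
  }
}
```

Only the pasted rectangle changes; frame[0][0] keeps its original value of 1, which is the behaviour the fixed sketch relies on.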

References:

http://www.kurtstubbings.com/blog/
