final code | Processing

Below is a copy of my final program code.


//import libraries
import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;

//PImage to hold each captured face region
PImage face;

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480, 20);
  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  frameRate(6);
  video.start();
  smooth();
}

void draw() {
  //need to scale up to make the image bigger at some point
  scale(1);

  //display the video feed in greyscale
  opencv.loadImage(video);
  image(video, 0, 0);
  filter(GRAY);

  //detect faces, then blur each one back onto the frame (up to four)
  Rectangle[] faces = opencv.detect();
  println(faces.length);
  for (int i = 0; i < min(faces.length, 4); i++) {
    face = get(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    face.filter(BLUR, 6);
    image(face, faces[i].x, faces[i].y);
  }
} // close draw

void captureEvent(Capture c) {
c.read();
}


Evaluation | Thoughts about Processing

In conclusion to my journey with Processing, I would say that I have quite enjoyed the learning side of this unit. Although at times the code provided to me through different sites and workshops proved hard to work out and sometimes left me frustrated, I was very happy with my overall outcome. I have a much clearer understanding of the way the code works, and even if I can't understand every character in a chunk of code, I can usually still explain its purpose and function.

As with any piece of work I do, I always see room for improvement in projects made solely for testing purposes. In this piece, I wish I had more time to perfect the mask applied back onto the detected face area. I would also have loved to resolve the FPS problem I had; it was a shame that even with such a low frame rate set, the piece tended to crash occasionally.

I think in the future I will continue to read about Processing in order to achieve a clearer understanding, as my understanding has limited me in some ways during this project.

Improvements and feedback from users | User Testing

After testing my program in the foyer, I wanted to get a little feedback from a user. I wanted feedback that related to the meaning of my program, as well as the improvements that could be made and the user's positive thoughts. I wrote out five questions for one user to answer. The answers are as seen below:

Q. Do you understand the concept behind this piece of interactive media?

A. Yes, I believe the concept portrayed was clear.

Q. How does this piece make you feel?

A. The piece made me feel anonymous; as the face is the most important feature in distinguishing one person from another, I felt stripped of my individualism.

Q. Do you see any further development ideas you'd like to discuss?

A. I believe a time code was needed, and the edges of the blur needed to be softened.

Q. What did you find yourself doing while interacting with this piece of media?

A. At first, trying to figure out what was happening, and then testing how much it would follow me.

Out of all the questions asked, the response I was happiest to receive was that the user felt as if their identity had been stripped from them. The idea of the face being the most important feature in distinguishing us from one another is exactly what I wanted the user to feel, and I think I've achieved just that.

Another user, after interacting with it, said that they thought it was actually a real surveillance camera, and that they only realised it was an interactive piece when they came up close and noticed the effect of the blur. Below is a video of someone interacting with my work on their way into the foyer through the north entrance.

Unresolvable Problems | User Testing

While I was setting up my program, a fellow student (shown in the image above) can be seen on his mobile phone with his face not detected, unlike the student walking through the foyer whose face is picked up and blurred out. I realised his face was not being recognised because the cascade I was using is trained on frontal faces, and is not suited to faces at this angle. As seen in the video below, I already knew the limits of the detection angle from my home testing.
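One approach that might have helped, sketched below purely as an untested idea, would be to load a second detector with the profile-face cascade that ships with the OpenCV for Processing library, and blur the detections from both. (The blurFaces helper is my own naming, and I am assuming the profile cascade would cope better with side-on faces than the frontal one did.)

  import gab.opencv.*;
  import processing.video.*;
  import java.awt.*;

  Capture video;
  OpenCV frontal, profile;

  void setup() {
    size(640, 480);
    video = new Capture(this, 640, 480, 20);
    //one detector per cascade: frontal faces and side-on (profile) faces
    frontal = new OpenCV(this, 640, 480);
    frontal.loadCascade(OpenCV.CASCADE_FRONTALFACE);
    profile = new OpenCV(this, 640, 480);
    profile.loadCascade(OpenCV.CASCADE_PROFILEFACE);
    video.start();
  }

  void draw() {
    image(video, 0, 0);
    filter(GRAY);
    frontal.loadImage(video);
    profile.loadImage(video);
    //blur every rectangle found by either detector
    blurFaces(frontal.detect());
    blurFaces(profile.detect());
  }

  void blurFaces(Rectangle[] faces) {
    for (Rectangle r : faces) {
      PImage face = get(r.x, r.y, r.width, r.height);
      face.filter(BLUR, 6);
      image(face, r.x, r.y);
    }
  }

  void captureEvent(Capture c) {
    c.read();
  }

Running two detectors would mean scanning every frame twice, so this would probably have made my frame rate problem worse.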

I also realised that a rotated face went unrecognised, which I'm sure was fixable, but in all honesty I had no idea how to go about it. I read through a few pages on Processing and couldn't find anything relating to this problem with OpenCV, so I decided to leave it alone. The video below shows the limits of facial rotation with my program.
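Looking at it again while writing this up, one idea I could have tried (sketched here only; the angles are guesses and I never tested it) is to rotate a copy of each frame by a few fixed angles, run the detector on each rotated copy, and map any detections back into the original frame before blurring. The blur box around the mapped centre is only an approximation, since the detection rectangle is axis-aligned in the rotated copy:

  //return a copy of src rotated by angle (radians) about its centre
  PImage rotatedCopy(PImage src, float angle) {
    PGraphics pg = createGraphics(src.width, src.height);
    pg.beginDraw();
    pg.translate(src.width / 2, src.height / 2);
    pg.rotate(angle);
    pg.image(src, -src.width / 2, -src.height / 2);
    pg.endDraw();
    return pg.get();
  }

  //inside draw(): try the frame at -30, 0 and +30 degrees
  float[] angles = { -PI / 6, 0, PI / 6 };
  for (float a : angles) {
    opencv.loadImage(rotatedCopy(video, a));
    for (Rectangle r : opencv.detect()) {
      //map the detection centre back into the unrotated frame
      //(rotate by -a about the image centre)
      float cx = r.x + r.width / 2 - width / 2;
      float cy = r.y + r.height / 2 - height / 2;
      float ox = cos(-a) * cx - sin(-a) * cy + width / 2;
      float oy = sin(-a) * cx + cos(-a) * cy + height / 2;
      PImage face = get(int(ox) - r.width / 2, int(oy) - r.height / 2, r.width, r.height);
      face.filter(BLUR, 6);
      image(face, ox - r.width / 2, oy - r.height / 2);
    }
  }

Detecting three times per frame would of course cost even more frame rate, so the list of angles would need to stay short.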

User Testing | Frame rate drops and unwanted faces

While setting up my program, the space around me proved well lit with very little interruption. One of the problems I came across was a small warning sign in the background which kept getting picked up as a face. The warning sign was placed on the automatic doors to the left of the display, at the entrance of the foyer. There wasn't much I could do about this mishap: the sign was there for a reason, and I couldn't remove or block it without defeating its purpose. Because my program lets four users interact with it at once, this didn't prove much of a problem unless more than three people were trying to interact at the same time.

I also found that even though my program already ran at an extremely low frame rate, it still tended to fall drastically below what I had originally set. This was because, as faces were detected and re-detected, the program had to constantly redraw each face back onto the selected area. At one stage the program even crashed due to the number of faces it was randomly picking up. This never happened in my testing at home, because there I had controlled the number of people coming into frame. When testing at home I did notice a drop in frame rate with the maximum number of faces, but that was before I reduced the frame rate in setup() to make it feel like a CCTV camera. In all honesty, I thought the lower frame rate would improve my overall outcome because it would leave the program with a lighter workload, but I was proved wrong. Below is a video example of me testing the program at home with just my face, to give you an idea of the frame rate I wanted.
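With hindsight, one thing I could have tried for the warning sign (a sketch only; the zone position and the 60-pixel minimum are guessed values, not something I measured) is to throw away any detection that is too small or that falls inside a fixed "ignore" zone drawn over where the sign sat in frame:

  //guessed position of the warning sign within the 640x480 frame
  Rectangle ignoreZone = new Rectangle(400, 100, 80, 80);

  boolean isRealFace(Rectangle r) {
    //reject tiny detections and anything overlapping the ignore zone
    if (r.width < 60 || r.height < 60) return false;
    if (ignoreZone.intersects(r)) return false;
    return true;
  }

  //inside draw(), filter the detections before blurring:
  Rectangle[] faces = opencv.detect();
  for (Rectangle r : faces) {
    if (!isRealFace(r)) continue;
    PImage face = get(r.x, r.y, r.width, r.height);
    face.filter(BLUR, 6);
    image(face, r.x, r.y);
  }

Skipping the false positives would also mean fewer regions to redraw each frame, which might have eased the frame rate drops as well.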

In my next post I will talk about another issue I had with my program that was unresolvable, but didn't prove to be too much of a problem in terms of testing.

My Program | responses and reactions

Shortly after setting up my program in the foyer, I got the responses I expected from passers-by. One passer-by even stopped briefly to check whether it was a program error, waving at the camera. The responses I received from people passing through the foyer were similar to children's reactions to CCTV cameras in shops. The reason this sprang to mind was the inevitability of the outcome the user expects: when people are in shops, they know that moving their bodies will be reflected in their appearance on the screen, but the question is, why do they do it if they already know the outcome?

Testing my prototype | Processing

Today I have been using the display screen of my choice to enable other students and teachers to interact with my work. I set up my work in the evening, at roughly 4pm, so unfortunately there weren't as many people in the foyer as I would have liked, but they still interacted with the piece as they walked by. The picture above shows the setup I provided within the foyer. I used a Thunderbolt to HDMI adapter to connect my MacBook Pro to the TV display screen in the lobby. I needed to change the display sizes in my program for it to be displayed properly on the screen. Below is the code I used to display my work.


  size(displayWidth, displayHeight);
  video = new Capture(this, displayWidth, displayHeight, "HD Pro Webcam C920");
  opencv = new OpenCV(this, video.width, video.height);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  frameRate(6);
  video.start();

With the sizes altered to suit the display screen, the webcam registered by name, and a resolution change on my Mac, my program could be shown properly in the foyer. My next post will focus on users' reactions to my program, and how they responded to it.