Final code | Processing

Below is a copy of my final program code.


//import libraries
import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;

//PImages to store captured faces
PImage face0 = createImage(0, 0, RGB);
PImage face1 = createImage(0, 0, RGB);
PImage face2 = createImage(0, 0, RGB);
PImage face3 = createImage(0, 0, RGB);

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480, 20);
  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  frameRate(6);
  video.start();
  smooth();
}

void draw() {
  //need to scale up to make the image bigger at some point
  scale(1);

  //display video feed
  opencv.loadImage(video);
  image(video, 0, 0);
  filter(GRAY);

  //array to store the detected faces
  Rectangle[] faces = opencv.detect();
  println(faces.length);

  //blur and redraw up to four detected faces
  if (faces.length >= 1) {
    face0 = get(faces[0].x, faces[0].y, faces[0].width, faces[0].height);
    face0.filter(BLUR, 6);
    image(face0, faces[0].x, faces[0].y);
    if (faces.length >= 2) {
      face1 = get(faces[1].x, faces[1].y, faces[1].width, faces[1].height);
      face1.filter(BLUR, 6);
      image(face1, faces[1].x, faces[1].y);
      if (faces.length >= 3) {
        face2 = get(faces[2].x, faces[2].y, faces[2].width, faces[2].height);
        face2.filter(BLUR, 6);
        image(face2, faces[2].x, faces[2].y);
        if (faces.length >= 4) {
          face3 = get(faces[3].x, faces[3].y, faces[3].width, faces[3].height);
          face3.filter(BLUR, 6);
          image(face3, faces[3].x, faces[3].y);
        }
      }
    }
  }
} // close draw

void captureEvent(Capture c) {
  c.read();
}

Evaluation | Thoughts about Processing

In conclusion to my journey with Processing, I would say that I have quite enjoyed the learning side of this unit. Although at times the code provided to me through different sites and workshops proved hard to work out and sometimes left me frustrated, I was very happy with my overall outcome. I have a much clearer understanding of the way the code works, and even if I can't understand every character in a chunk of code, I can usually still explain its purpose and function.

As with any piece of work I do, I always see room for improvement in projects made solely for testing purposes. In this piece of work, I wish I had had more time to perfect the mask applied back onto the facial recognition space. I would also have loved to resolve the FPS problem I had; it was such a shame that even with such a low frame rate set, the piece tended to crash occasionally.

I think in the future I will continue to read about Processing in order to achieve a clearer understanding, as gaps in my understanding have limited me in some ways during this project.

Improvements and feedback from users | User Testing

After testing my program in the foyer, I wanted to get a little feedback from a user. I wanted feedback that related to the meaning of my program, as well as the improvements that could be made and the user's positive thoughts. I wrote out five questions for one user to answer. The answers are as seen below:

Q. Do you understand the concept behind this piece of interactive media?

A. Yes I believe the concept portrayed was clear.

Q. How does this piece make you feel?

A. The piece made me feel anonymous, as the face is the most important feature in distinguishing one person from another I felt stripped of my individualism.

Q. Do you see any further development ideas you’d like to discuss?

A. I believe a timecode was needed, and the edges of the blur needed to be softened.

Q. What did you find yourself doing while interacting with this piece of media?

A. At first, trying to figure out what was happening then secondly how much it would follow me.

Out of all the questions asked, the response I was happiest to receive was that the user felt as if their identity had been stripped from them. The idea of the face being the most important feature in distinguishing us from one another is exactly what I wanted the user to feel, and I think I've achieved just that.

Another user, after interacting with it, said that they thought it was actually a real surveillance camera, and that they only realised it was an interactive piece when they came up close and noticed the effect of the blur. Below is a video of someone interacting with my work on their way into the foyer through the north entrance.
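The point about softening the blur's edges stuck with me. A rough sketch of how it might be done (not something I implemented; the gradient step and ellipse sizes are guesses) would be to fade the blurred patch out with a radial mask before drawing it back:

//sketch only: fade a blurred face patch out towards its edges
//so the hard rectangle is less visible
PImage softBlur(PImage face) {
  face.filter(BLUR, 6);
  //build a radial mask: dark (transparent) at the edges,
  //bright (opaque) towards the centre
  PGraphics mask = createGraphics(face.width, face.height);
  mask.beginDraw();
  mask.background(0);
  mask.noStroke();
  for (int i = 0; i < 255; i += 8) {
    mask.fill(i);
    float w = map(i, 0, 255, face.width, face.width * 0.4);
    float h = map(i, 0, 255, face.height, face.height * 0.4);
    mask.ellipse(face.width / 2, face.height / 2, w, h);
  }
  mask.endDraw();
  face.mask(mask);
  return face;
}

Calling face0 = softBlur(get(faces[0].x, faces[0].y, faces[0].width, faces[0].height)) in place of the separate filter() call would then blend the blur into the unblurred frame underneath instead of ending at a hard edge.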

Unresolvable Problems | User Testing

While I was setting up my program, a fellow student (shown in the image above) is seen on his mobile phone, but his face is not detected, unlike the student seen walking through the foyer, whose face is picked up and blurred out. The face went undetected because the cascade I was using was not suited to faces seen from this angle. As the video below shows, I already knew the limits of the detection angle from testing at home.

I also realised that rotated faces were not recognised, which I'm sure was fixable, but in all honesty I had no idea how to go about it. I read through a few pages on Processing and couldn't find anything relating to this problem using OpenCV, so I decided to leave it alone. The video below shows the limits of facial rotation with my program.
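For the record, one possible approach (just a sketch, not something I implemented; the test angles are guesses, and mapping detected rectangles back to the unrotated frame is left out) would be to run the detector over rotated copies of each frame:

import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;
PGraphics rotated;
float[] angles = { -20, 0, 20 }; //degrees to try each frame (guesses)

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480, 20);
  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  rotated = createGraphics(640, 480);
  video.start();
}

void draw() {
  image(video, 0, 0);
  for (float a : angles) {
    //draw the frame rotated about its centre into an offscreen buffer
    rotated.beginDraw();
    rotated.background(0);
    rotated.pushMatrix();
    rotated.translate(width / 2, height / 2);
    rotated.rotate(radians(a));
    rotated.imageMode(CENTER);
    rotated.image(video, 0, 0);
    rotated.popMatrix();
    rotated.endDraw();
    //run the detector on the rotated copy
    opencv.loadImage(rotated);
    Rectangle[] faces = opencv.detect();
    println("angle " + a + ": " + faces.length + " face(s)");
  }
}

void captureEvent(Capture c) {
  c.read();
}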

User Testing | Frame rate drops and unwanted faces

While setting up my program, the space around me proved well lit, with very little interruption. One of the problems I came across was a small warning sign in the background which kept getting picked up as a face. The sign was placed on the automatic doors to the left of the display, at the entrance of the foyer. There wasn't much I could do about this mishap: the warning sign was there for a reason, and I couldn't remove it or block it out without defeating its purpose. Because my program allows four users to interact with it at once, it didn't prove much of a problem unless more than three people were trying to interact at the same time. One possible workaround is sketched below.

I also found that even though my program already ran at an extremely low frame rate, it still tended to fall drastically below the rate I had set. This was because, as faces were detected and re-detected, the program had to constantly redraw each face back onto the selected area. At one stage the program even crashed due to the number of faces it was randomly picking up. This never happened in the testing I'd done at home, because there I had controlled the number of people coming into frame. When testing at home I did notice a drop in frame rate with the maximum number of faces, but that was before I reduced the frame rate in setup() to make it feel like a CCTV camera. In all honesty, I thought the lower rate would improve my overall outcome because it would leave the program with a lighter workload, but I was proved wrong. Below is a video example of me testing the program at home with just my face, to give you an idea of the frame rate I wanted to have.
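As for the warning sign, one workaround I could have tried (sketch only; the sign's coordinates here are made up) would be to ignore any detection that falls inside a fixed dead zone covering the sign:

import java.awt.*;
import java.util.*;

//hypothetical area covering the warning sign on the doors
Rectangle deadZone = new Rectangle(40, 120, 80, 80);

//keep only the faces whose rectangles don't overlap the dead zone
Rectangle[] withoutDeadZone(Rectangle[] faces) {
  ArrayList<Rectangle> kept = new ArrayList<Rectangle>();
  for (Rectangle f : faces) {
    if (!f.intersects(deadZone)) {
      kept.add(f);
    }
  }
  return kept.toArray(new Rectangle[0]);
}

In draw(), Rectangle[] faces = withoutDeadZone(opencv.detect()); would then drop the sign before the blur code runs.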

In my next post I will talk about another issue I had with my program that was unresolvable, but didn't prove to be too much of a problem in testing.

My Program | Responses and reactions

Shortly after setting up my program in the foyer, I got the responses I expected from passers-by. One passer-by even stopped briefly and waved at the camera to check whether it was a program error. The responses I received from people passing through the foyer reminded me of the way children react to CCTV cameras in shops. The reason this sprang to mind was the inevitability of the outcome the user expects: when people are in shops, they know that moving their bodies will be reflected in their appearance on the screen, but the question is, why do they do it if they already know the outcome?

Testing my prototype | Processing

Today I have been using the display screen of my choice to enable other students and teachers to interact with my work. I set up my work in the evening period at roughly 4pm, so unfortunately there weren't as many people in the foyer as I would have liked, but they still interacted with the piece as they walked by. The picture above shows the setup I provided within the foyer. I used a Thunderbolt-to-HDMI adapter to connect my MacBook Pro to the TV display screen in the lobby. I needed to change the display sizes in my program in order for it to be displayed properly on the screen. Below is the code I used to display my work.


size(displayWidth, displayHeight);
//select the external webcam by name
video = new Capture(this, displayWidth, displayHeight, "HD Pro Webcam C920");
opencv = new OpenCV(this, video.width, video.height);
opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
frameRate(6);
video.start();

With the sizes altered to cater for the display screen, the webcam registered by name, and a resolution change on my Mac, my program displayed properly in the foyer. My next post will focus on users' reactions to my program and how they responded to it.
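Looking back, a variant worth noting (sketch only, not what I ran on the day) would have been to keep the capture at the webcam's native 640x480 and stretch the drawn frame to fill the display, rather than asking the camera for the display's resolution:

size(displayWidth, displayHeight);
video = new Capture(this, 640, 480, "HD Pro Webcam C920");
opencv = new OpenCV(this, 640, 480);
opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
frameRate(6);
video.start();

//in draw(): stretch the 640x480 frame to the full display size
image(video, 0, 0, width, height);

The detected rectangles would then be in 640x480 coordinates, so they would need scaling by width/640.0 and height/480.0 before the blur is drawn back.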

Final Prototype | Processing

I have now finished my interactive piece. It runs at a low frame rate to mimic a CCTV camera, and the feed is filtered to grey. It detects a maximum of four faces at once, blurring out users' faces as they walk by. Below is the code for my program:

//import libraries
import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;

//PImages to store captured faces
PImage face0 = createImage(0, 0, RGB);
PImage face1 = createImage(0, 0, RGB);
PImage face2 = createImage(0, 0, RGB);
PImage face3 = createImage(0, 0, RGB);


void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480, 20);
  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  frameRate(3);
  video.start();
  smooth();
}

void draw() {
  //need to scale up to make the image bigger at some point
  scale(1);

  //display video feed
  opencv.loadImage(video);
  image(video, 0, 0);
  filter(GRAY);

  //array to store the detected faces
  Rectangle[] faces = opencv.detect();
  println(faces.length);

  //blur and redraw up to four detected faces
  if (faces.length >= 1) {
    face0 = get(faces[0].x, faces[0].y, faces[0].width, faces[0].height);
    face0.filter(BLUR, 6);
    image(face0, faces[0].x, faces[0].y);
    if (faces.length >= 2) {
      face1 = get(faces[1].x, faces[1].y, faces[1].width, faces[1].height);
      face1.filter(BLUR, 6);
      image(face1, faces[1].x, faces[1].y);
      if (faces.length >= 3) {
        face2 = get(faces[2].x, faces[2].y, faces[2].width, faces[2].height);
        face2.filter(BLUR, 6);
        image(face2, faces[2].x, faces[2].y);
        if (faces.length >= 4) {
          face3 = get(faces[3].x, faces[3].y, faces[3].width, faces[3].height);
          face3.filter(BLUR, 6);
          image(face3, faces[3].x, faces[3].y);
        }
      }
    }
  }
} // close draw

void captureEvent(Capture c) {
  c.read();
}

The problem I had last time, with the blur filter covering the whole screen instead of just the rectangle around the detected face, has been resolved. The trouble with my original code was that it never redrew the recognised face, because nothing commanded it to do so. In the new code, the get() call inside each if statement grabs the pixels under the detected rectangle, stores them in one of the PImage variables, and redraws that image back into the space with the blur effect applied. The code I had previously could apply a masked image from the PImage in setup() onto the facial detection space, but couldn't apply the blur. With a point in the right direction from another student, I was able to understand how this newly discovered code would help me achieve my idea. I now have a clear understanding of how this kind of function works within Processing.
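For what it's worth, the four nested if statements could also be written as a loop. Below is a sketch of the same blur logic (identical behaviour, same four-face limit), which draw() could call once the faces array is filled:

void blurFaces(Rectangle[] faces) {
  int limit = min(faces.length, 4); //same four-face cap as the nested ifs
  for (int i = 0; i < limit; i++) {
    //grab the pixels under the detected rectangle
    PImage face = get(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    //blur the copy and draw it back over the face
    face.filter(BLUR, 6);
    image(face, faces[i].x, faces[i].y);
  }
}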

References:

http://www.kurtstubbings.com/blog/

The Space | Weymouth House foyer

Today I have been analysing the space in which my interactive piece will be displayed. To start with, I wanted to look at the different areas the space had to offer. In terms of where the screens are placed, I had three choices: the seated area on the right side of the foyer, the north entrance to the right (the entrance with the TARDIS and the Monsters, Inc. model), or the small studio on the left, past the entrance. Because my work uses a CCTV-like interaction, I thought it was best to display mine on the north entrance display.

I did a 15-minute survey of how many people walk through this area, from 12:45 till 1pm. The number of people that walked through this specific space in that period was 35. I managed to record this at the same time that other students were displaying their work in the same area I would be using. I tallied 20 people who gave a quick interaction in reaction to seeing the work in this 15-minute period, roughly 57% of the footfall. This proved that the space I was going to be using had a healthy amount of traffic and would probably get the most use out of my interactive piece. In my next post I will be talking about how users interact with my piece of work in this space.

Banksy | A deeper insight and meaning

The story of street artist Banksy has always been quite a hazy one, to say the least. Nobody really knows who he is, his art is displayed illegally all over the world, and his legacy has lain in the minds of most young to middle-aged people since the 90s. In many eyes he is seen as more of an inspiration than a criminal because of the meanings behind his art, which usually points towards political matters and the wars of the modern world. The image below, stencilled by Banksy, symbolises the outrageous hunt for fossil fuels and the greed we as humans have, which is slowly but surely destroying our planet:


Image taken by Mark Bull.

Now onto where this becomes relevant to my theories and ideas for my interactive piece. Looking at the infamous piece of art by Banksy, ‘One Nation Under CCTV’, I began to get a clearer understanding of why his art became so iconic in the first place. His art leaves people questioning their own identity with just a simple piece of imagery as they walk by. My interactive media piece involves the same kind of theory, as it allows people to walk by and have a glimpse at what the piece is about, or what it means to them. I can also link my interactive piece to the fact that once you are placed on a computer or online, you can have your identity stripped from you. The image below is the ‘One Nation Under CCTV’ piece by Banksy.


In an article, Banksy stated: “You don’t have to go to college, drag ’round a portfolio, mail off transparencies to snooty galleries or sleep with someone powerful, all you need now is a few ideas and a broadband connection”. This basically means that, in the eyes of the artist himself, anyone can become an artist or get somewhere quite fast with the power of the internet. With this statement in mind, I began thinking about some of the online works and individuals I’d seen over the years; people like PewDiePie, a YouTuber who became one of the highest earners on the site, topping 33 million subscribers. Some famous YouTubers simply vlog their entire lives, from the moment they wake to when they go to sleep, yet they still maintain a very strong and positive perception in the public eye. It makes me question how much it actually takes to become ‘someone’ in the real world, and whether the line between reality and virtual reality can still be drawn.

In conclusion, I can safely say that Banksy’s work is heavily based on the public’s identity, and is placed mainly to make people question what they see as normal, or what they are made to believe is normal behaviour. His perception in the public eye is very ironic, seeing as he has never been given an identity himself. Maybe this is his idea of another form of his uniquely fashioned artwork; nobody knows.

Reference List:

Baudrillard, J. (2014). The Spirit of Terrorism. Paris: Verso Books. pp. 30-70.

http://www.dailymail.co.uk/news/article-559547/Graffiti-artist-Banksy-pulls-audacious-stunt-date–despite-watched-CCTV.html

http://www.goodreads.com/review/show/301340979

http://www.smithsonianmag.com/arts-culture/the-story-behind-banksy-4310304/#lCU0iQz6DtAR0SJO.99

http://socialblade.com/youtube/user/pewdiepie