![[Midterm] War Darts](https://i0.wp.com/www.itp.paulaceballos.com/wp-content/uploads/2016/03/6B4A1751-copy-1.jpg?fit=5760%2C3840)
Holy cheeseballs was this project tough.
We had a total of 3 weeks for this, and I think our biggest setback was the fact that we spent 1.5 weeks trying to nail down an idea that both Aaron and I liked. The good part is, working with him (again) is the best, so we knew it would be hard work, but it didn’t matter.
So, here’s the idea: War Darts
What is it? A commentary on the automation and dehumanization of war through the visualization of untethered actions that leave behind a wake of consequences.
How does it work? A user will come up to a map on the wall and throw darts at it, similar to a dartboard. When a dart lands on the map, an animation will be triggered that simulates an atomic bomb exploding, with the help of visual and audio cues.
What about the technology? Using Processing and a Kinect, we will calculate the position of the dart on the map, process that data, and then overlay (with projection mapping) an animation over the dart that shows the ramifications of war actions.
Now, the process.
Once we locked down our idea, we needed to figure out how to make it work. I think Aaron and I sat and messed around with the Kinect and Processing for a good 3-4 days with nothing working. We tried the depth image, the IR (infrared) image, the registered image, and the RGB image, as well as the array of raw depth values from the Kinect. Nothing worked. We moved on and tried the BlobScanner, BlobDetection, and OpenCV libraries; none of that worked either. For a second, we tried a multi-Kinect setup, but that just seemed suicidal. After that, we decided to give the Kinect 1 a shot (we had been using the Kinect 2 up until then) and tried the whole slew of images just mentioned until the RGB image was the winner!
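For the curious, the winning capture boiled down to something like this minimal sketch, using Shiffman’s Open Kinect for Processing library (the commented-out lines stand in for the sources we tried and abandoned; take it as a sketch of the approach, not our exact code):

```java
// Minimal capture sketch: Kinect 1 + Open Kinect for Processing.
// The RGB video image was the only source clean enough to work with.
import org.openkinect.processing.*;

Kinect kinect;

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.initVideo();       // RGB camera: the winner
  // kinect.initDepth();    // depth image: couldn't reliably see the darts
  // kinect.enableIR(true); // IR image: same story
}

void draw() {
  // Draw the current RGB frame; this is what the blob detection runs on.
  image(kinect.getVideoImage(), 0, 0);
}
```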
Once that was solved, we just needed to do the rest of the project in one night. Totally doable.
We needed to count the blobs, determine their locations, and trigger an animation at each blob’s x and y position. Miraculously, we got it done by 4am!
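In spirit, the detection loop looked something like this (a minimal sketch using the OpenCV for Processing library; the threshold, the noise cutoff, and the triggerExplosion helper are illustrative stand-ins, not our actual values):

```java
// Count dart blobs in the Kinect 1 RGB frame and fire an
// explosion animation at each blob's center.
import org.openkinect.processing.*;
import gab.opencv.*;
import java.awt.Rectangle;

Kinect kinect;
OpenCV opencv;

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.initVideo();
  opencv = new OpenCV(this, 640, 480);
}

void draw() {
  PImage frame = kinect.getVideoImage();
  image(frame, 0, 0);

  opencv.loadImage(frame);
  opencv.threshold(80);               // illustrative value, tuned by eye

  for (Contour blob : opencv.findContours()) {
    if (blob.area() < 50) continue;   // skip sensor noise
    Rectangle box = blob.getBoundingBox();
    triggerExplosion(box.x + box.width / 2f, box.y + box.height / 2f);
  }
}

// Hypothetical stand-in for the real animation: a red burst
// at the dart's position.
void triggerExplosion(float x, float y) {
  noStroke();
  fill(255, 0, 0, 150);
  ellipse(x, y, 60, 60);
}
```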
With that done, I ran home, slept 3 hours, changed and was back on the floor at 9am to finish the project and set it up for our class at 3:30pm. TIME FLEW BY.
Now the only thing we needed to get done was the projection mapping. I had gone to the Movement Collective’s workshop on projection mapping in order to prepare for this, but I just couldn’t get it to work. Aaron is familiar with Millumin, so we decided to give that a shot. It didn’t work. We then decided to try MadMapper, and lastly Syphon’s Simple Client to stream my actual screen onto the map. I’m not sure what the problem was, but nothing was working.
Finally, after messing around with some options, the chain of Processing to Syphon to Syphoner to Millumin worked. 15 minutes before class. Now we just needed to resize and test everything to make sure it was working.
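For anyone retracing that chain: the Processing end of it is just a Syphon server, and Millumin can then grab the feed as a Syphon source. A minimal sketch with the Syphon for Processing library looks roughly like this (the server name and the placeholder drawing are mine, for illustration):

```java
// Publish the sketch's canvas as a Syphon feed that
// Millumin (or Simple Client) can receive.
import codeanticode.syphon.*;

SyphonServer server;

void setup() {
  size(640, 480, P3D);   // Syphon needs an OpenGL renderer (P2D/P3D)
  server = new SyphonServer(this, "War Darts");
}

void draw() {
  background(0);
  fill(255, 120, 0);
  ellipse(width/2, height/2, 100, 100);  // stand-in for the bomb animation
  server.sendScreen();   // push the current frame out over Syphon
}
```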
Unfortunately, it wasn’t working. We managed to resize it to roughly the area we needed, but the sketch just wasn’t working. Defeated, we headed into class. We were pretty bummed. We had spent so much time and effort trying to get this to work, and at the last minute we just couldn’t do it.
The first half of the class came and went and we went on break. We ran back to my computer (we had left everything outside the class) to see if we could fix it and, miraculously I might add, it was working! I think we may have overloaded my computer a bit and that’s why it wasn’t working before.
We ran, grabbed Gabe, grabbed a camera and the class, and presented. It wasn’t working perfectly, and it definitely has a LONG way to go, but it worked!
And with that, we sleep.