Skill Builder: Router

This week, I practiced using the handheld router and the router table. I found a circular cutout of plywood and followed its edge with a round-over finishing bit.

IMG_8221.jpeg
IMG_8222.jpeg
IMG_8223.jpeg

I did a few quick passes on the router table with a standard end mill on the back side of the circular piece. I could not quite get a straight line because the circular piece had no flat edge to glide along the table's fence.

IMG_8227.jpeg

Serendipitous Imagery

This week, we were tasked with collecting visual scenes that exemplify textures, colors, and shapes rather than focusing on narrative or linear storytelling. I often record video portraits when a particular scene captivates me.

Here is a link to a shared album of selected videos, with some of my highlights uploaded below.

A temperate drizzle breaks apart the cloud-masked reflection, with sleepy koi drifting into branches.

Afternoon light seeping through the window of my father’s friend’s bathroom.

A mushroom cloud forming above Mount Fuji’s peak, a sign of impending rain for the week.

Dried flora within the clouds of Hakone.

Game Client: Plushy Controller

This week in Connected Devices, we were tasked with creating a hardware client for the ball drop game. I decided to “hack” a stuffed animal my partner had made me, using its limbs and nose as the controller buttons.

IMG_8197.jpg

Bill of Materials

  • (1/2) yard of fuzzy yellow fabric - $45/yard. Purchased in the Garment District.

  • (5) large push buttons - $12.10 for 5. Amazon.

  • (5) white LEDs - provided by ITP.

  • Arduino Nano 33 IoT

  • Micro USB cable

  • Solid core wire

Code

For the code, I referenced this thread as a refresher on arrays and state changes.
Here is the code posted on GitHub.

Documentation

Here are the controller buttons on the breadboard, each illuminating its corresponding LED. The buttons control the Left, Up, Right, and Down movements of the ball game.

IMG_0463.JPG

Here is a time-lapse of me soldering the buttons and LEDs:

Here is a video demonstrating the LEDs turning on when each button is pressed.

Here is the video of the plushy controller being used to play the Ball Drop Game:

p5.js: Final Visualization

Throughout the course of the semester, I came up with a few projects that I felt could complement each other. My final has become a compilation of the following:

breathing.gif
cubearray.gif
birdparadise.gif

A very inspirational website that has helped me generate new ideas is: http://www.generative-gestaltung.de/2/

When exploring 3D space, Allison had recommended I look into the EasyCam library: https://github.com/diwi/p5.EasyCam
After working through a number of examples, I wanted to try navigating a 3D space with PoseNet.
Here’s a sketch of my exploration: https://editor.p5js.org/niccab/sketches/v4ralRUJ2

posenet.gif
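The heart of that exploration is feeding a PoseNet keypoint into WEBGL's camera() call. Here is a minimal sketch of the idea, assuming ml5's PoseNet; the distances and the choice of the nose keypoint are illustrative rather than the exact values from my sketch (linked above):

    let video;
    let nose = { x: 320, y: 240 };   // latest nose keypoint, with a default

    function setup() {
      createCanvas(640, 480, WEBGL);
      video = createCapture(VIDEO);
      video.hide();
      let poseNet = ml5.poseNet(video);           // load the PoseNet model
      poseNet.on('pose', (poses) => {
        if (poses.length > 0) nose = poses[0].pose.nose;
      });
    }

    function draw() {
      background(230);
      // Move the camera eye with the nose: stepping side to side in front of
      // the webcam walks the view around the box sitting at the origin.
      camera(nose.x - width / 2, nose.y - height / 2, 600, 0, 0, 0, 0, 1, 0);
      normalMaterial();
      box(120);
    }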


When talking to people about my project and asking around whether anyone had experience with libraries or 3D environments, Jiaxin showed me his final project. He created a 3D environment from SCRATCH, without knowing that there was a camera function in WEBGL. In his final project, he calculates the angles at which the wall planes should be positioned based on face detection via ml5.js. While his project takes a different approach than mine, it gave me so much insight into the math involved in simulating a 3D environment in a 2D way - I recommend checking out his very detailed blog post here: https://wp.nyu.edu/jiaxinxu/2019/12/11/icm-media-final-the-tunnel/



Alone Together: Takeaways

In collaboration with Emily Zhao & Sue Roh

alone_together.png

Initial Plans

We planned to sit in isolation for eight hours with little to no structure, to heighten the discomfort of the isolation we were collectively feeling. To make the experience more interactive, we were going to create a public URL that friends and strangers could visit to view a live stream of each of our rooms and send messages that we could view later.

Nick’s Feedback

  • Repetition or a series of rituals that give a structure to the time that otherwise can feel empty. 

  • How might you accentuate the sharing? Are there elements that can repeat across all three of your performances?

  • What will be the exit of the performance?

After reading Nick’s feedback, we realized the necessity of hammering out the goals of the experience. We agreed that we were trying to cope with loneliness rather than heighten its discomfort, which the current structure of the performance was unlikely to achieve.

Reframing experience for mindfulness and meditation

  • We decided that different mindful practices would help us make peace with our loneliness. We did away with the live stream aspect of the performance, since we felt it would introduce a sense of surveillance, which would inevitably alter our behavior and detract from the goals we were trying to accomplish.

  • We moved away from the idea of trying to “prove something.” We ended up focusing more on acceptance of our current state.

Goals

  • Embracing loneliness rather than fearing it

  • Developing better methods for interrupting negative thought loops and detrimental behavioral patterns

  • Assessing our relationship with time and how we can better schedule it to allow for intentional mindfulness

  • To strip away technology and to take a break from external stresses

  • Loneliness itself is an uncomfortable feeling, but it also exacerbates depression and anxiety. The surest way to relieve anxiety and depression is to be mindful and present. We wanted not to make ourselves feel worse, but to actually find better ways to cope with being alone.

Schedule & Techniques

We spoke with some of our peers about their meditation experiences and practices. Aidan told us about his retreat, recalling his pattern of 30 minutes of breathing meditation followed by 30 minutes of walking meditation. This inspired our more scheduled approach to the four hours.

Together, drawing on our own personal experiences, we also included mantras, symbolic rituals, movement meditation, and self-love exercises.

Allowance of 3 Bathroom Breaks
Sue: on the hour
Nicole: on the :15
Emily: on the :30

Mantra: I am more than enough. 

8:00–8:30 Settling in: With pen and paper, writing our intentions
8:30–9:00 Gratitude: People, opportunities, essentials, etc.
9:00–9:30 Walking meditation
9:30–10:30 Object/theme: String
10:30–11:30 Mantra/writing or bodywork: Massaging, body love
11:30–12:00 Breathing meditation
12:00–1:00 Debrief

Materials

  • Each room had the following:

    • GoPro to record a time-lapse at 5-second intervals

    • Laptop with webcam to record full length video

    • Chair

    • Table

    • Blackout curtain for privacy

Documentation: Timelapse

Debrief: Final Thoughts and Reactions vs. How We Thought We Would Feel

  • We all agreed that the experience was much easier than we anticipated, demonstrating that our fear of loneliness is often worse than the experience of it. It was still difficult, but instead of feeling tortured, we were able to create a positive and peaceful experience. Afterwards, we were all in a state of calm. 

  • Our breakups prompted us to formulate this experience, but during it we felt only gratitude for the various people and things in our lives.

Questions / Further Development

  • How would the experience be different if we were in relationships and not feeling lonely? 

  • What if this experience happened immediately after our breakups, without sufficient time to process our emotions?

  • Would it have been harder if it took place in the morning when we weren’t so exhausted? 

  • How would our behaviors have changed if we incorporated the audience in an interactive way? 

  • How different would it be if the room were soundproofed? (We could hear the voices of other students and the humming of the laptop fans.)

  • Did the presence of a laptop/camera alter our behavior?

  • How would the experience change if we were not in a school setting?

Monument: I Suffered Alone

Objective

Construct an interactive memorial or monument that reflects an issue in society you feel is overlooked, unresolved, or neglected. Your design should use the conventions of memorials and monuments, have a strong invitation, but also challenge your audience.

Concept

Emily and I began our discussion by reflecting on our individual experiences - what angers us, what we feel is overlooked, what we would like people to be more aware of. Emily opened up to me about her very personal experience of being bulimic for a year and how alone she felt. Even after she told her close friends what she was going through, they did not check in with the frequency Emily may have hoped for, and the lack of support from loved ones made her feel worse.

We wanted to raise awareness that there are people suffering from eating disorders, and that it is an individual experience they endure alone.

Process

We purchased one male and one female outfit, each consisting of a black hoodie and a pair of black pants. We filled the clothing with scrap material so that it would take the shape of a body. We edited vomiting sounds, uploaded them to an iPod Nano, and connected it to a speaker so that we could install it with the stuffed form. Our intention was to install a form in both a men's and a women's bathroom stall.

IMG_5561.JPG
IMG_5573.JPG

Afterthought

After actualizing this monument, I felt deeply saddened, helpless, troubled - if this is an issue that so many people hide behind closed doors, what can I do to help? How can I make a change? How can we change the way society shapes how we perceive our self-image? Emily explained that the issue can't be reduced to perceived body image; there is also an element of control that drives the person to induce purging.

Final Project: Alone Together

In collaboration with Emily Zhao & Sue Roh

alone_together.png

ALONE TOGETHER

Having all recently broken up with our long-term live-in partners, the three of us have found singledom to be extremely uncomfortable and sad. At one point or another, we have all resorted to serial dating, drugs, and copious amounts of masturbation. However, all of these coping mechanisms merely distract from the core problem: not knowing how to be alone. 

Who are we now without our partners? What do we do with our time? How do you fill the void? Is it happiness if it’s not shared?

We want to push ourselves to the most drastic form of loneliness we can imagine – eight hours in a room without any stimulation. We will allow ourselves three five-minute bathroom breaks. The entire experience will be individually recorded and broadcast on the web for people to see. 

Ideally, the three of us will be in close proximity and will experience the solitude in the same time frame. No one is truly alone in their loneliness.

Will we improve our ability to be alone after the experience? Can we endure eight solitary hours? What will we do to occupy the time? We only have questions. 

As much as we don’t want to do this, we hope we can glean something from the experience. We know the power of discomfort, and there’s so much to learn from risk and vulnerability. 

p5js: Sound

I wanted to use PoseNet to control various bird sounds. Playing around with Google's bird sound project inspired me, and I would like to create a similar interface (though I would need to purchase the library of sounds) and then use PoseNet to trigger specific sounds in place of the mouse press.

My original idea was that if you shimmied your shoulders, a sort of cacophonous flapping or symphony of birds would be triggered. With PoseNet, I felt the tracking was unsteady at times, and the shimmying would need to be over-exaggerated. I decided instead that the way you move your entire body should control the rate of the various bird sounds. In this way, the user's approach to the camera becomes a dance in itself, much like how the bird-of-paradise attracts its mate.

The four points that trigger sounds are:

Left Hand: Riflebird

Right Hand: Parotia

Left Shoulder: Nightingale

Right Shoulder: Raven

bopbop.gif

I mapped each bird's song rate to the Y position of its corresponding PoseNet keypoint. Between the hand positions and the shoulder positions, I found myself curling and expanding my body to alter the way the sounds layered and interacted with each other.
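Roughly, each keypoint-to-rate mapping looked like the snippet below. It assumes ml5's PoseNet and p5.sound, uses a placeholder file name, and shows just one of the four birds:

    let video;
    let pose;                                   // most recent detected pose
    let riflebird;                              // placeholder sound file

    function preload() {
      riflebird = loadSound('riflebird.mp3');   // hypothetical file name
    }

    function setup() {
      createCanvas(640, 480);
      video = createCapture(VIDEO);
      video.hide();
      let poseNet = ml5.poseNet(video);
      poseNet.on('pose', (poses) => {
        if (poses.length > 0) pose = poses[0].pose;
      });
      riflebird.loop();                         // a click may be needed first so
    }                                           // the browser allows audio to start

    function draw() {
      image(video, 0, 0);
      if (pose) {
        // Higher left hand = faster song, lower left hand = slower.
        let r = map(pose.leftWrist.y, 0, height, 2.0, 0.5);
        riflebird.rate(r);
      }
    }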

Before the PoseNet keypoints are picked up by the camera, the preloaded GIFs show up at the top of the canvas. My question for this week: is there a way to hide these GIFs altogether before they align on the canvas?

Here’s a link to the sketch.
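One way I might answer my own question, assuming the GIFs are DOM elements made with createImg(): p5.Element has hide() and show(), so each GIF can start hidden and only be shown and positioned once the first pose arrives. A rough sketch with a hypothetical file name:

    let birdGif;

    function setup() {
      createCanvas(640, 480);
      birdGif = createImg('riflebird.gif', 'riflebird');   // hypothetical file
      birdGif.hide();                  // invisible until a pose is detected
    }

    // Call this from the PoseNet 'pose' callback with a keypoint's coordinates.
    function showGifAt(x, y) {
      birdGif.position(x, y);          // position() is in page coordinates, so
      birdGif.show();                  // offset by the canvas position if needed
    }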

Week 1: Ethics of Discomfort

Last year, a friend of mine told me to check out the new immersive VR experience by Alejandro González Iñárritu titled Carne y Arena, a simulation of what refugees experience at the US-Mexico border when apprehended by border patrol. In retrospect, I feel embarrassed that my initial thoughts were something along the lines of “Wow! He directed Birdman, it must be really good. Whoa, you put on a VR headset, wear a backpack, and are ushered into a room of sand. This is going to be an experience.” I admit to being at first curious about the production quality, the setup of the immersive experience, the VR technology that drives it, and the cinematic visuals that accompany the narrative - before settling into the idea that I'd be entering someone else's nightmare.

A past professor of mine started a discussion around the term “poverty porn” and how there are several VR experiences that give privileged audiences a momentary slice of another's exploited living conditions. While such an experience can profoundly change people's perspectives and dispositions afterward, I feel there should be mandatory action that directly helps the subjects of the VR experience.

This also reminded me of the episode of Hidden Brain titled “You 2.0: The Empathy Gym.” Here, Jamil Zaki speaks with host Shankar Vedantam:

"ZAKI: I think a lot of us have this experience when we see, for instance, a homeless individual on the sidewalk ahead of us. I've heard of people who cross the street to avoid that encounter maybe because they don't want to sort of see that person's suffering close up because it will make them feel sad or guilty or both.

VEDANTAM: There's some irony there, isn't there, which is that the person who is likely to actually be more empathic is also the person who's likely to cross the street because they recognize that the empathy that they have inside them is going to make them feel bad.”

Limited Grains

Aside from water and air, sand is the most used natural resource, and we’re starting to run out.

What? Running out? How could that be when it seems like the most abundant thing on Earth? Not all grains of sand are created equal, and the type of sand we need for our building materials (concrete, glass, or silica) is harvested from the bottoms of rivers or from beaches. The way we mine this sand has detrimental environmental impacts similar to oil fracking: we are digging deeper and deeper into a limited resource and transporting tons and tons of it over obscene distances, destroying our planet at an extraordinary cost.

The question isn’t really what’s going to happen when we run out of sand. The question is what’s going to happen when we run out of everything ... We know that we’re using too much freshwater. We know that we’re cutting down too many trees. We know we’re harvesting too many fish out of the oceans. We know we’re burning too much fossil fuel. And now come to find out we’re using too much sand. Well to my mind, these are not separate problems. They’re all symptoms of the same problem which is that we’re consuming too much, right? The way that we live here in the Western World and that lifestyle that we’ve now exported to the rest of the world, it just consumes way too many natural resources. And the planet simply can’t sustain it.
— Vince Beiser, Podcast: 99% Invisible - Built on Sand

I want to bring this issue to light - I don't think it's common knowledge that our supplies of usable sand are dwindling, that there's a black market for it, or that people's lives are in danger just to get to the sand we have left. The demand for sand is driven by Western consumption and the idealization of living large and luxuriously, with little regard for how it impacts others and the world we live in.

I want the viewer to think about how quickly giant cities are being built overnight, and that we simply don't need these large mansions or luxury hotels. I want to remind everyone that living large comes at an expense. We need to live smaller, use fewer resources, and restructure the way we live.


Inspiration

I would not have known about the scarcity of sand without the enlightening 361st episode of 99% Invisible, Built on Sand. Here is a link to the podcast transcript; you can listen to the episode below. I am currently reading Vince Beiser's book The World in a Grain so that I can learn more about the environmental impact of sand mining.

Over the years, I've thought about and examined how light travels through and is reflected by glass. Knowing that the glass industry is another contributor to our depleting resources, this exploration now feels indulgent. Glass nonetheless has mesmerizing reflective properties, and we can at least be conscious of that and thank sand for it.

In 2014, my friend Camille showcased her mixed-media painting, which incorporated textiles and what I thought was 3M reflective paint. She explained that the canvas was covered with white painted numbers layered with reflective glass beads, the material found in road markings and street signs.

A Reflection of Self, Camille Reyes, 2014

I came across this material again when I saw Mary Corse’s solo exhibition at the Whitney. I’m interested in her approach, and how she is using the glass beads to create a perceptual, inner experience instead of an objective reality.

I'm also inspired by Aki Sasamoto and her spinning glass sculptures: the movement, the sound, the aesthetic of a vessel within a vessel. “When everything is lined up, it starts to have its own logic and I have no control over it. That's another way for me to be dominated by objects. They start telling its own story.”


Process

For Limited Grains, I am fabricating a rectangular vessel in which two of the sides will be clear acrylic and the other two will be white acrylic diffusers with an LED array embedded behind them to illuminate the interior chamber. The light will provide the white surface that the reflective glass beads need to showcase their properties. At the bottom of the vessel, there will be a fan to create a whirlwind of bouncing glass beads.

The LED array will be composed of eight 16"-long WS2812B strips connected to a BlinkinTape control board. I will use madMapper to program the light animation to suggest a passing of time, with either a sweeping left-to-right motion, a radial sweep from the center, or a video feed of an hourglass time-lapse (too on the nose?). I have purchased a miniMad controller, which can save madMapper presets, and will upload the animations to it so the piece can stand alone.

As I will be traveling to the Sahara, I plan on bringing back the very sand that is so abundant yet unusable for our modern civilization. I will create a secondary chamber that partially envelops the glass bead vessel. I hope to create a juxtaposition in which this outer chamber of stagnant Saharan sand blocks the visible light, emphasizing the ephemeral experience within.

IMG_4153.jpg

Here is a study for the interior chamber’s glass bead whirlwind.

IMG_3506.jpg
IMG_3507.jpg

Rain Dance

Wouldn’t you like to dance in the rain without getting wet? Getting caught in the rain can be refreshing in the summer, but listen, it’s October now and everyone is getting the sniffles, myself included. Introducing… Rain Dance.

CONCEPT
Rain Dance is an interactive installation that projects a raining scene with the silhouette of the viewer in the middle. Randomized GIFs of dancing feet will be superimposed on this body-tracked silhouette, activated once the user is in the correct location in front of the camera and stepping on load sensors connected to an Arduino. A 30" circular pool of water will be placed directly underneath the projected image of the feet. Sound exciters will be attached to the bottom of the pool, and when the user moves their feet, a sound will be triggered, causing the water to vibrate.

IMG_3975.jpg

VISUALIZATION PROTOTYPE: p5.js

I wanted to create the raining scene in p5.js using a randomized array of raindrops, but how could I tackle the randomized GIFs of dancing feet that would splash around this scene? The idea is that the load sensor would detect a set minimum weight and trigger a GIF to appear. I envisioned pulling an image URL from an API, but I had difficulty getting the syntax just right.

I followed Shiffman's Working with APIs in Javascript tutorial, and I was successful in calling specific parameters to point to the correct GIF URL. After some more perusing, I found another relevant Shiffman tutorial: The Giphy API and Javascript. Here is a link to my sketch following the tutorial with the GIF correctly loading. In the example, however, Shiffman populates the GIFs into a list and does not confine them within a canvas. If I create a canvas, the GIF appears outside of it. I tried the push, pop, and translate functions within the function where I call createImg() to see if I could shift the GIF to where I wanted it, without success.

I decided I would create an array of saved GIFs. I thought it might be as straightforward as loading images, but I quickly realized that this was not the case. Branching off the rotating cat example, I replaced the kitty with a GIF and saw that the GIF's position was static and that a new GIF would not be generated when I clicked the mouse. What is the best way to create an array of GIFs?

I came across Lauren McCarthy's now-closed request for adding GIF support to the image() function and also this thread on Stack Overflow. The following libraries handle GIFs:

  • My sketch using p5.gif.js did not work - I received these errors: Uncaught ReferenceError: loadGif is not defined (line 12), Uncaught TypeError: Cannot read property 'loaded' of undefined (line 21).

  • My sketch using p5.gif worked, but it took forever to load a very simple GIF, and I still did not understand how to shift where the GIF is positioned.

  • p5.createLoop looks more focused on creating GIFs within the p5 environment, and "add draw option to stay in sync with GIF loop" is still on its to-do list.
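In the meantime, a fallback that sidesteps the libraries is to keep each GIF as a p5 DOM element and position it over the canvas. DOM elements live outside the canvas, so translate()/push()/pop() never affect them, which I suspect is why my earlier attempts failed; position() in page coordinates does. A rough sketch, with hypothetical file names:

    let cnv;                          // reference to the canvas element
    let feetGifs = [];                // GIFs kept as p5 DOM elements
    const gifFiles = ['feet1.gif', 'feet2.gif', 'feet3.gif'];   // placeholders

    function setup() {
      cnv = createCanvas(640, 480);
      for (let f of gifFiles) {
        let g = createImg(f, 'dancing feet');
        g.hide();                     // keep every GIF hidden at first
        feetGifs.push(g);
      }
    }

    function mousePressed() {
      // Position in page coordinates, offset by where the canvas sits.
      let g = random(feetGifs);
      let c = cnv.position();
      g.position(c.x + random(width - 100), c.y + height - 150);
      g.show();
    }

In the installation, the mousePressed() trigger would be replaced by the load-sensor reading arriving over serial.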

Interacting with the Arduino

Now that I have a p5.js sketch to work with, I can use an Arduino to communicate with the sketch via serial. I plan on using four load cells: two will measure the weight of the left foot, and the other two the right foot. When a user leans left or right on the load cells, the X position of the GIF shifts by a mapped value. Because I am still waiting for the load cell amplifier that will combine the readings from the four load cells, I decided to use a potentiometer in its place. The potentiometer is mapped to the width of the canvas so that the GIF shifts when its value changes.
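On the p5 side, the serial hookup looks roughly like this. It assumes the p5.serialport library with its local serial server running, a placeholder port name, and an Arduino that prints one analog reading (0-1023) per line:

    let serial;        // p5.SerialPort instance
    let gifX = 0;      // mapped X position for the GIF

    function setup() {
      createCanvas(640, 480);
      serial = new p5.SerialPort();
      serial.open('/dev/tty.usbmodem14101');   // placeholder port name
      serial.on('data', serialEvent);
    }

    function serialEvent() {
      // Expect one potentiometer (later: combined load cell) reading per line.
      let inString = serial.readLine();
      if (inString.length > 0) {
        gifX = map(Number(inString.trim()), 0, 1023, 0, width);
      }
    }

    function draw() {
      background(20);
      ellipse(gifX, height / 2, 50, 50);   // stand-in for the dancing-feet GIF
    }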

raindancegif.gif
61T98cbhG6L._AC_SL1001_.jpg
Wheatstone_Bridge_Load_Sensor_bb_Fritzing.jpg

p5js: Functions

This week, I wanted to consolidate the code of the project I worked on with Dawn for week 3's algorithmic design assignment.

giphy1.gif

Quite frankly, the code itself is pretty simple and already makes use of built-in functions, specifically createSlider(). I attempted to create a function that would encapsulate naming the slider, assigning the min, max, value, and step within createSlider(), and setting the slider's position. I broke a few things in the process and ultimately concluded that creating a function just to call a function felt redundant.

p5jsW5.PNG
p5jsW52.PNG

I did, however, create a function specifically for the GUI's sections. This simplified setting the rectangle's color, positioning the rectangle, adding the title, and positioning the title.

p5jsW53.PNG
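The helper boiled down to something like this; the names, colors, and layout here are illustrative rather than the exact values shown in the screenshot:

    function setup() {
      createCanvas(400, 300);
    }

    // Draws one titled GUI section: a background rectangle plus its heading.
    function guiSection(x, y, w, h, bgColor, title) {
      noStroke();
      fill(bgColor);
      rect(x, y, w, h);               // section background
      fill(255);
      textSize(14);
      text(title, x + 10, y + 20);    // section title, inset from the corner
    }

    function draw() {
      background(30);
      guiSection(10, 10, 180, 130, color(60, 90, 160), 'Shape');
      guiSection(10, 150, 180, 130, color(90, 60, 160), 'Color');
    }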

Exploring Sound with Arduino

IMG_1347.jpg

I was able to get ahold of multiple discarded speakers from the back of a television and thought, "Well hey, I know how to put these to good use."

The left and right speakers each have four pins that are connected to a single eight-pin connector. It's easy to identify which wires go to which speaker, since they are grouped together and color-coded. I wanted to test both speakers' functionality.

I decided to upload Für Elise to my Arduino using the code posted here.

If I wanted one note to play on one speaker and then switch to the other speaker, I would have to manually write out each note. Just as a demonstration, I manually switched the hook-up wire in the same Arduino pin back and forth between the two speakers.

Here is a video demonstrating the functionality of both speakers.


p5.js: Repetition

This week, I wanted to continue my explorations within the 3D realm and create an array of cubes that would rotate when the mouse is moved. Here is the link to the sketch.

Prior to starting my sketch, I sought out inspiration from the book Generative Design and navigated through the examples of movement within a grid format. I came across this example and saw how they used multiple .SVG files as options for what populated the grid. At first, I attempted to add my own .SVG file to see how I could manipulate it there, but I could not figure out why my image would not load, even when it was uploaded into the correct file structure. Regardless, I knew I did not want to stick within the confines of a 2D image, and I explored repetition with the 3D box shape.

Just as in the last assignment, I had difficulty understanding how exactly camera() works, and I thought it might be helpful to replace each parameter with mouseX. Here is a video of how the sketch changes when each parameter is set to mouseX.

repetition04.gif
Efp1DwAAQBAJ.jpeg
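For reference, camera() takes the eye position, the point it looks at, and an up vector; the video above swaps mouseX into one parameter at a time. As a small example, replacing the eye's X position looks like this:

    function setup() {
      createCanvas(400, 400, WEBGL);
    }

    function draw() {
      background(200);
      // camera(eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ)
      // With mouseX as eyeX, moving the mouse slides the eye left and right.
      camera(mouseX - width / 2, 0, 500, 0, 0, 0, 0, 1, 0);
      normalMaterial();
      box(100);
    }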

I was confused about why the final cube at the lower right, near the center point, looked as if it were getting sucked into a vortex. I thought this was merely due to the camera angle, hence the camera() exploration, but I did not find a direct way to push this array "against a wall" or keep it along the same depth plane.

I then started to add control over the way the Z axis translation changed across the sketch. I ended up creating a gradual sloped look, as if the stack of cubes had legs that were starting to give out.

I also added the ability to increase the number of tiled cubes by pressing the up arrow.
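A stripped-down version of the whole study might look like the sketch below: a square grid of boxes whose rotation follows the mouse, whose Z offset increases across the grid to create the slope, and whose tile count grows with the up arrow. The exact spacing and offsets here are illustrative; the real sketch is linked above.

    let tiles = 5;          // cubes per row and column; up arrow adds more

    function setup() {
      createCanvas(400, 400, WEBGL);
    }

    function draw() {
      background(240);
      let spacing = width / tiles;
      for (let i = 0; i < tiles; i++) {
        for (let j = 0; j < tiles; j++) {
          push();
          // Lay the cubes on a grid centered on the origin, pushing each one
          // a little further back in Z across the grid to create the slope.
          translate(i * spacing - width / 2 + spacing / 2,
                    j * spacing - height / 2 + spacing / 2,
                    -(i + j) * 10);
          rotateX(mouseY * 0.01);     // rotation follows the mouse
          rotateY(mouseX * 0.01);
          box(spacing * 0.5);
          pop();
        }
      }
    }

    function keyPressed() {
      if (keyCode === UP_ARROW) {
        tiles++;                      // add another row and column of cubes
      }
    }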

After this study, I am looking forward to using orbitControl() and perspective() and adding them to my tool belt for 3D space exploration.

Here is a playlist that demonstrates the meandering road I followed.