p5.js: Final Visualization

Throughout the course of the semester, I came up with a few projects that I felt could complement each other. My final has become a compilation of the following:

breathing.gif
cubearray.gif
birdparadise.gif

A very inspirational website that has helped me generate new ideas is: http://www.generative-gestaltung.de/2/

When exploring 3D space, Allison had recommended I look into the EasyCam library: https://github.com/diwi/p5.EasyCam
After working through a number of examples, I wanted to try navigating a 3D space with PoseNet.
Here’s a sketch of my exploration: https://editor.p5js.org/niccab/sketches/v4ralRUJ2

posenet.gif


When I asked around about whether anyone had experience with libraries and 3D environments, Jiaxin showed me his final project. He built a 3D environment from SCRATCH, not knowing that WEBGL has a camera function. His project calculates the angles at which the wall planes should be positioned, driven by face detection via ml5.js. While his approach is different from mine, it gave me so much insight into the math involved in simulating a 3D environment in a 2D way - I recommend checking out his very detailed blog post here: https://wp.nyu.edu/jiaxinxu/2019/12/11/icm-media-final-the-tunnel/



p5js: Sound

I wanted to use PoseNet to control various bird sounds. Playing around with Google's bird sound project inspired me; I would like to create a similar interface (though I would need to purchase the library of sounds) and then use PoseNet to trigger specific sounds in place of the mouse press.

My original idea was that shimmying your shoulders would trigger a sort of cacophonous flapping or symphony of birds. With PoseNet, I felt the tracking was unsteady at times, and the shimmying would need to be over-exaggerated. I decided instead that the way you move your entire body should control the rate of the various bird sounds. In this way, the user's approach to the camera becomes a dance in itself, like how the Bird of Paradise attracts its mate.

The four points that trigger sounds are:

Left Hand: Riflebird

Right Hand: Parotia

Left Shoulder: Nightingale

Right Shoulder: Raven

bopbop.gif

I mapped each bird's song to its PoseNet keypoint's Y position. Between the hand positions and shoulder positions, I found myself curling and expanding my body to alter the way the sounds layered and interacted with each other.
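The mapping itself is simple. Here's a minimal sketch of the idea, assuming ml5's PoseNet keypoint shapes (pose.leftWrist and friends are {x, y} objects) and p5.SoundFile's rate() method; yToRate and updateBirdSounds are my own hypothetical helper names:

```javascript
// Hypothetical helper: convert a keypoint's Y position into a playback rate.
// Higher in the frame -> faster song, lower -> slower (same idea as p5's map()).
function yToRate(y, canvasHeight, minRate = 0.5, maxRate = 2.0) {
  const t = Math.min(Math.max(y / canvasHeight, 0), 1); // clamp to 0..1
  return maxRate + t * (minRate - maxRate);
}

// Called once per frame with the detected pose (sketch outline):
function updateBirdSounds(pose, songs, canvasHeight) {
  // ml5's PoseNet exposes keypoints such as pose.leftWrist as {x, y} objects
  songs.riflebird.rate(yToRate(pose.leftWrist.y, canvasHeight));
  songs.parotia.rate(yToRate(pose.rightWrist.y, canvasHeight));
  songs.nightingale.rate(yToRate(pose.leftShoulder.y, canvasHeight));
  songs.raven.rate(yToRate(pose.rightShoulder.y, canvasHeight));
}
```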

Before the PoseNet keypoints are picked up by the camera, the preloaded GIFs show up on the top of the canvas. My question for this week: Is there a way to hide these GIFs altogether before they align on the canvas?
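One way this could work, since createImg() returns a p5.Element with .hide() and .show() methods: start every GIF hidden and only reveal them once PoseNet reports its first pose. A sketch of that, with placeholder filenames:

```javascript
let birdGifs = [];
let posesDetected = false;

function setup() {
  createCanvas(640, 480);
  // Placeholder filenames: every GIF starts hidden
  for (const file of ['riflebird.gif', 'parotia.gif', 'nightingale.gif', 'raven.gif']) {
    const img = createImg(file, 'bird');
    img.hide();
    birdGifs.push(img);
  }
}

// PoseNet 'pose' callback: reveal the GIFs on the first detection
function gotPoses(poses) {
  if (poses.length > 0 && !posesDetected) {
    posesDetected = true;
    birdGifs.forEach(g => g.show());
  }
}
```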

Here’s a link to the sketch.

Rain Dance

Wouldn’t you like to dance in the rain without getting wet? Getting caught in the rain can be refreshing in the summer, but listen, it’s October now and everyone is getting the sniffles, myself included. Introducing… Rain Dance.

CONCEPT
Rain Dance is an interactive installation that projects a raining scene with the silhouette of the viewer in the middle. This body-tracking silhouette will be overlaid with randomized GIFs of dancing feet, activated once the user stands in front of the camera in the correct location and steps on load sensors connected to an Arduino. A 30” circular pool of water will be placed directly underneath the projected image of the feet. Sound exciters will be attached to the bottom of the pool, and when the user moves their feet, a sound will be triggered, causing the water to vibrate.

IMG_3975.jpg

VISUALIZATION PROTOTYPE: p5.js

I wanted to create the raining scene with p5.js using a randomized array of raindrops, but how can I tackle the randomized GIF of dancing feet that would splash around this scene? The idea is that the load sensor would detect a set minimum weight and trigger a GIF to appear. I envisioned pulling an image URL from an API, but I had difficulty getting the syntax just right.

I followed Shiffman’s Working with APIs in Javascript tutorial, and I was successful in calling specific parameters to point to the correct GIF URL. After some more perusing, I found another relevant Shiffman tutorial: The Giphy API and Javascript. Here is a link to my sketch following the tutorial with the GIF correctly loading. In the example, however, Shiffman populates the GIFs into a list and does not confine them within a canvas. If I create a canvas, the GIF appears outside of it. I tried the push, pop, and translate functions within the function where I call createImg() to see if I could shift the GIF to where I want it, without success.
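I suspect the reason push/pop/translate had no effect is that createImg() builds a DOM element that lives outside the canvas, so canvas transforms never touch it; the element's own .position() method (page coordinates) is what moves it. A sketch of that, where canvasToPage is my own hypothetical helper and the filename is a placeholder:

```javascript
// Hypothetical helper: canvas coords -> page coords, given the canvas offset
function canvasToPage(x, y, canvasLeft, canvasTop) {
  return { x: canvasLeft + x, y: canvasTop + y };
}

let gif;

function setup() {
  const cnv = createCanvas(600, 400);
  cnv.position(50, 50); // pin the canvas so its page offset is known
  gif = createImg('feet.gif', 'dancing feet'); // placeholder filename
  // push()/translate()/pop() only transform canvas drawing; a createImg()
  // element is in the DOM, so it needs .position() in page coordinates:
  const p = canvasToPage(200, 150, 50, 50);
  gif.position(p.x, p.y);
}
```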

I decided I would create an array of saved GIFs. I thought it might be as straightforward as loading images, but I quickly realized that this was not the case. Branching off the rotating cat example, I replaced the kitty with a GIF and saw that the positioning of the GIF was static and would not generate a GIF when I clicked the mouse. What is the best way to create an array of GIFs?
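One way to structure it, sticking with DOM elements rather than image(): keep an array of hidden createImg() elements and, on each mouse press, hide them all and reveal a random one at the mouse position. A sketch with placeholder filenames (pickRandom is my own helper):

```javascript
let feetGifs = [];

function setup() {
  createCanvas(600, 400);
  // Placeholder filenames for the saved GIFs; all start hidden
  for (const f of ['feet1.gif', 'feet2.gif', 'feet3.gif']) {
    const g = createImg(f, 'dancing feet');
    g.hide();
    feetGifs.push(g);
  }
}

function pickRandom(arr) {
  return arr[Math.floor(Math.random() * arr.length)];
}

function mousePressed() {
  feetGifs.forEach(g => g.hide()); // only one GIF visible at a time
  const g = pickRandom(feetGifs);
  g.position(mouseX, mouseY); // DOM position, since the GIFs aren't on the canvas
  g.show();
}
```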

I came across Lauren McCarthy’s now-closed request for adding GIF support to the image() function, and also this thread on StackOverflow. The following libraries handle GIFs:

My sketch using p5.gif.js did not work - I received these errors: Uncaught ReferenceError: loadGif is not defined (: line 12), Uncaught TypeError: Cannot read property 'loaded' of undefined (: line 21)
My sketch using p5.gif worked, but it took forever to load a very simple GIF, and I still did not understand how to shift where the GIF would be positioned.
The p5.createLoop library looks more focused on creating GIFs within the p5 environment, and “add draw option to stay in sync with GIF loop” is still on its to-do list.

Interacting with the Arduino

Now that I have a p5.js sketch to work with, I can use an Arduino to communicate with it via Serial. I plan on using four load cells: two measuring the weight of the left foot and two the right. When a user leans left or right on the load cells, the X position of the GIF would shift by a mapped value. Because I am still waiting for the load cell amplifier that will combine the readings from the four load cells, I decided to use a potentiometer in its place. The potentiometer is mapped to the width of the canvas so that the GIF shifts as its value changes.
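An outline of that serial plumbing, using the p5.serialport library. Assumptions: the Arduino prints the 0–1023 analog reading one value per line, and the port name is machine-specific; potToX is my own helper name:

```javascript
let serial;
let gifX = 0;

function setup() {
  createCanvas(640, 480);
  serial = new p5.SerialPort();
  serial.open('/dev/tty.usbmodem1411'); // port name differs per machine
  serial.on('data', gotData);
}

// Fires whenever serial data arrives; one potentiometer reading per line
function gotData() {
  const line = serial.readLine();
  if (line.length > 0) {
    gifX = potToX(Number(line), width);
  }
}

// Map the 10-bit analog reading (0-1023) onto the canvas width
function potToX(reading, canvasWidth) {
  return (reading / 1023) * canvasWidth;
}
```

Once the load cell amplifier arrives, only the Arduino side should need to change: the sketch just keeps reading one number per line.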

raindancegif.gif
61T98cbhG6L._AC_SL1001_.jpg
Wheatstone_Bridge_Load_Sensor_bb_Fritzing.jpg

p5js: Functions

This week, I wanted to consolidate the code of the project I worked on with Dawn for week 3’s assignment of algorithmic design.

giphy1.gif

Quite frankly, the code itself is pretty simple and already makes use of built-in functions, specifically createSlider(). I made an attempt to write a function that would encapsulate naming the slider, assigning the min, max, value, and step within createSlider(), and setting the slider’s position. I broke a few things in the process, and ultimately concluded that creating a function just to call another function felt redundant.

p5jsW5.PNG
p5jsW52.PNG

I created a function specifically for the GUI’s sections. This simplified creating the color of the rectangle, positioning the rectangle, adding the title, and positioning the title.

p5jsW53.PNG
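The helper looks roughly like this (my own naming and offsets, not the exact code in the screenshot): one call draws a section's background rectangle and places its title inside it.

```javascript
// Sketch of a GUI-section helper: background rectangle plus positioned title
function drawSection(x, y, w, h, sectionGray, title) {
  noStroke();
  fill(sectionGray);
  rect(x, y, w, h);            // the section's background rectangle
  fill(0);
  text(title, x + 10, y + 20); // the section's title, offset inside it
}

// usage inside draw(), e.g.:
// drawSection(0, 0, 200, 100, 230, 'Box Width');
```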

p5.js: Repetition

This week, I wanted to continue my explorations within the 3D realm and create an array of cubes that would rotate when the mouse is moved. Here is the link to the sketch.

Prior to starting my sketch, I sought out inspiration from the book Generative Design and navigated through the examples of movement within a grid format. I came across this example and saw how they used multiple .SVG files as options for what populates the grid. At first, I attempted to add my own .SVG file to see how I could manipulate it there, but I could not figure out why my image would not load, even when uploaded into the correct file structure. Regardless, I knew I did not want to stick within the confines of a 2D image and explored repetition with the 3D box shape.

Just as I did in the last assignment, I had difficulty understanding how exactly camera() works, and thought it might be helpful to replace its parameters with mouseX. Here is a video of how the sketch changes when each parameter is swapped for mouseX.
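For reference, camera() takes nine parameters: three for the eye position, three for the point it looks at, and three for the up vector. A minimal WEBGL sketch with mouseX swapped into the eye X slot, the same kind of experiment as in the video:

```javascript
function setup() {
  createCanvas(400, 400, WEBGL);
}

function draw() {
  background(240);
  // camera(eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ)
  // p5's default eye position is (0, 0, (height / 2) / tan(PI / 6));
  // swapping mouseX into eyeX slides the camera along the X axis.
  camera(mouseX - width / 2, 0, (height / 2) / tan(PI / 6), 0, 0, 0, 0, 1, 0);
  box(50);
}
```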

repetition04.gif
Efp1DwAAQBAJ.jpeg

I was confused why the final cube at the lower right, near the center point, looked as if it were getting sucked into a vortex. I thought this was merely due to the camera angle, hence the camera() exploration, but I did not find a direct way to push this array “against a wall”, along the same depth plane.
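My best guess, though I have not rebuilt the sketch to confirm: the vortex look is foreshortening from the default perspective projection, and ortho() swaps in an orthographic projection so cubes at different depths no longer converge toward a vanishing point:

```javascript
function setup() {
  createCanvas(600, 600, WEBGL);
  // Orthographic projection: no foreshortening, so cubes at different Z
  // depths read as sitting on the same depth plane.
  ortho(-width / 2, width / 2, -height / 2, height / 2);
}
```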

I then started to add control to the way the Z axis translated across the sketch. I ended up creating this gradual sloped look as if the stack of cubes had legs that were starting to give out.

I also included the functionality to increase the amount of tiled cubes there would be by pressing the up arrow.
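The up-arrow handler is just a counter that the nested for loops in draw() read on the next frame (tileCount is my name for it, and 5 is an arbitrary starting value):

```javascript
let tileCount = 5; // cubes per row/column; draw()'s loops run up to this

function keyPressed() {
  if (keyCode === UP_ARROW) {
    tileCount++; // one more row and column of cubes next frame
  }
}
```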

After this study, I am looking forward to adding orbitControl() and perspective() to my tool belt for 3D space exploration.

Here is a playlist that demonstrates the meandering road I followed.

p5.js: Algorithmic Design

Dawn and I partnered up to create a spiraling array of boxes with adjustable parameters. The box width, transparency, frame rate, and background color are the elements with values that exist within a range, controlled by a series of sliders.

p5js_algdes.gif


As a starting point, we looked through Kelly De Naples' sketches, which could be found here: https://editor.p5js.org/denaplesk2/sketches. We came across one in particular titled: Exercise 4: Functions_Circles, which could be found here: https://editor.p5js.org/denaplesk2/sketches/ryIwGqZsf.

To understand her algorithmic design, we explored each line of code step by step. We wanted to break down her exercise into digestible, individually controlled elements. We changed a whole slew of parameters and turned her patterned grid of jostling circles into a 3D environment of cubes.

Screen Shot 2019-09-24 at 8.21.24 PM.png
Screen Shot 2019-09-24 at 8.14.32 PM.png

At first, we were confused why the boxes appeared stacked on top of one another, especially since there were two for loops creating both rows and columns of boxes. We quickly realized that within WEBGL, push(), translate(), and pop() are required to ensure that each box shifts as the variables x and y increase within the for loops.
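A minimal version of that fix, assuming a 5 × 5 grid of 30-pixel boxes: without the push/translate/pop, every box() call draws at the origin, so the loop just stacks them in one spot.

```javascript
function setup() {
  createCanvas(400, 400, WEBGL);
}

function draw() {
  background(220);
  for (let x = 0; x < 5; x++) {
    for (let y = 0; y < 5; y++) {
      push();                                   // save the transform state
      translate(x * 60 - 120, y * 60 - 120, 0); // move to this grid cell
      box(30);                                  // box() always draws at the origin
      pop();                                    // restore, so shifts don't accumulate
    }
  }
}
```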

Another point of confusion was why the boxes were drawn starting from the center of the canvas. This is because in WEBGL mode the origin (0, 0) sits at the center of the canvas rather than the top-left corner. We had tried changing the values of different parameters, but instead we used the top portion of the canvas for our user interface of adjustable sliders. For a future iteration, it would be interesting to create an additional slider that controls the camera angle.

Screen Shot 2019-09-24 at 10.22.05 PM.png

p5.js: Animation, Variables

We are only in our third week of school, but new stresses have already accumulated. I began looking for inspiration in any moment I had a chance to breathe and step out of my thoughts. I sought out a humble escape across the street and parked in one of those red lounge chairs in the middle of the MetroTech campus. With proper posture for that lounger, I became square with the overcast sky, quietly blanketed with a paper-thin leafy canopy.

I wanted to replicate the subtle breathing of these leaves that barely created shelter from an impending drizzle. I began the assignment by looking at two Recursive Tree examples:

I enjoyed the natural, unpredictable length of branches of the first example, and the interaction of growing branches of the second.

Unsure how to create abstracted leaves while avoiding a direct figurative translation, I began exploring trunk widths through the expression of multiple cylinders. The width was mapped to the mouseX position.

Screen Shot 2019-09-14 at 2.16.46 AM.png
Screen Shot 2019-09-14 at 2.19.29 AM.png

I scrapped this visual and wanted to focus on the breath, not just of these greens but mostly my own. I envisioned a multicolor mandala composed of layered shapes that would undulate as you exhale. Using the mic, your “breath” or voice controls the shapes’ rotation around the Z axis. The mapped breath value and the mouseX and mouseY coordinates control the R, G, and B values of the fill.
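An outline of that mic-to-rotation wiring with p5.sound (the shapes and ranges in my actual sketch differ; this is the skeleton of the idea):

```javascript
let mic;

function setup() {
  createCanvas(400, 400, WEBGL);
  mic = new p5.AudioIn();
  mic.start(); // the browser will ask for microphone permission
}

function draw() {
  background(0);
  const level = mic.getLevel(); // 0.0-1.0 amplitude of breath/voice
  rotateZ(map(level, 0, 1, 0, TWO_PI)); // breath spins the shape around Z
  // breath level and mouse position drive the R, G, B channels
  fill(
    map(level, 0, 1, 0, 255),
    map(mouseX, 0, width, 0, 255),
    map(mouseY, 0, height, 0, 255)
  );
  ellipse(0, 0, 150, 150);
}
```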

Click here to view the sketch.

p5.js: Screen Drawing

When assigned to create any sort of screen drawing with p5.js, I thought to pull inspiration from modern art. We reviewed the classic programming example of Sol LeWitt’s Wall Drawings, where a museum, gallery, or other space simply receives a set of instructions of how to create the artwork. I thought I would pick a random Sol LeWitt work, and I came across this prompt for the Boston Museum:

Random points on a canvas that are not evenly spaced.

I started working on the code for this: https://editor.p5js.org/niccab/sketches/fmrRtWck1

I paused after making the random points and concluded that this set of instructions might be better suited for Week 6’s class on arrays. My code only creates 50 random dots with no concern for where each dot sits in relation to the others, so they are not evenly spaced across the canvas. Furthermore, in order to connect the random, evenly placed points, I would need to store their coordinates in an array.

My question then might be, is there a way to randomly place dots that are evenly spaced? If the spacing is defined and each point has consideration for its neighbor, is the coordinate of the point truly random?
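One answer I've since come across is rejection sampling: keep drawing uniformly random candidates but reject any that land within a minimum distance of an existing point. Each accepted coordinate is still random; only the spacing is constrained. A sketch, with the helper name my own:

```javascript
// Generate up to n random points that are at least minDist apart.
// Candidates that crowd an existing point are rejected and redrawn,
// so the accepted coordinates stay uniformly random.
function evenlySpacedRandomPoints(n, w, h, minDist, maxAttempts = 10000) {
  const pts = [];
  let attempts = 0;
  while (pts.length < n && attempts < maxAttempts) {
    const x = Math.random() * w;
    const y = Math.random() * h;
    const farEnough = pts.every(
      p => (p.x - x) ** 2 + (p.y - y) ** 2 >= minDist * minDist
    );
    if (farEnough) pts.push({ x, y });
    attempts++;
  }
  return pts;
}

// p5 sketch using it:
function setup() {
  createCanvas(400, 400);
  background(255);
  strokeWeight(4);
  for (const p of evenlySpacedRandomPoints(50, width, height, 30)) {
    point(p.x, p.y);
  }
}
```

The stored array also answers the second half of the instructions: with the coordinates kept around, the points can later be connected with lines.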

I thought a bit further about what we had discussed in class, and I remember that one of the example questions was “how do you use the arc function and define its parameters?” Still inspired by paintings, I wanted to replicate Hilma Af Klint’s Svanen.

p5.js replica of Svanen.

Here is the p5.js sketch: https://editor.p5js.org/niccab/sketches/Fci2SMXje

When making this, I did get a bit confused using the arc function. Of course after a bit of trial and error, I figured out how to place the coordinates. Is there a particular reason why an arc starts to draw clockwise beginning east? Why not north? Is there a way to define the arc with degrees instead of radians?
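On the degrees question, the answer is yes: angleMode(DEGREES) makes arc() (along with rotate(), sin(), cos(), and friends) take degrees. As for the starting point, arcs begin at 3 o'clock because p5 keeps the standard math convention of 0 along the positive X axis; the sweep looks clockwise rather than counterclockwise because the canvas Y axis points down. A minimal example:

```javascript
function setup() {
  createCanvas(200, 200);
  angleMode(DEGREES); // arc() now takes degrees instead of radians
}

function draw() {
  background(255);
  // A three-quarter arc: starts east (0 degrees), sweeps clockwise to 270
  arc(100, 100, 120, 120, 0, 270);
}
```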

I would like to add visible brush strokes onto the p5.js version of Svanen. What might be the best way to approach an organic brush stroke overlay that follows the confines of the shape that contains it?

What Computational Media Means To Me

My background is in Visual Arts, with a focus on Architectural Photography. After I received my Bachelor’s, I realized that I fell out of love with photography. I didn’t want to hide behind the lens anymore; instead, I wanted to create the compositions of light that I once simply studied and took pictures of.

In 2014, I decided to backpack throughout Europe and Japan with the sole purpose of documenting light festivals and created this side project dubbed Square.Sky. I attended Lumina in Cascais, Festival of Lights in Berlin, Glow in Eindhoven, Fête des Lumières in Lyon, and several Christmas spectaculars scattered throughout Japan. This wasn’t JUST light - I was exposed to a cornucopia of new media: projection mapping, interactive sensors, spatial sound systems, addressable lighting, and so much more.

I have been inspired by the likes of teamLab, rAndom International, Moment Factory, Nonotak, and so many more. I have worked with ITP Alumni CHiKA on her installation SEI05, Brooklyn Research as an intern to find New Forms of Interaction, and now Leo Villareal as a technical project manager. My passion is to learn how I can manipulate light and fabricate the structures to support those interactive systems.

I see computation as a means to communicate between software and hardware to facilitate human interaction.