COMPUTING WITH MEDIA

TIME: Fall 2015

Hands-on introduction to developing multimedia applications. Representation and perception of sound, images, and time. Media computing paradigms including OOP, callbacks, multithreading, OpenGL, distributed computing, algorithmic control, indeterminacy, real-time interactivity, and mapping data between sensory modalities. Students develop a series of audiovisual works (as C++ software) leading to a final project.

Course Website

EXPERIENCE

The course prepared me to work on a project I had been dreaming of developing for a long time, the Kaleidoscopic Geodesic Dome. Along the way, I learnt a plethora of new techniques and concepts. Audio and image processing was something I had previously dealt with mostly in theory, but this curriculum gave me the chance to experiment with it in practice. By the end of the quarter, I also felt a definite improvement in my C++ coding skills and in my ability to recognize the appropriate workflow for realizing a project.


PART 1

This assignment separated an image into individual pixels and arranged them in different color spaces according to the key pressed (a code sketch of these mappings follows the list).

Key 1 – Arranges all the pixels at the home position, that is, their original positions in the source image.

Key 2 – Arranges the pixels in an RGB color cube (a 3D color space), where the x coordinate is given by the red component, y by the green component, and z by the blue component.

Key 3 – Arranges the pixels in an HSV cylinder (a 3D color space), where the hue H is the radial angle, the saturation S is the radius, and the value V is the z coordinate.

Key 4 – Arranges the pixels in a 2D color space, where the x coordinate is given by the sum of the red and green components and the y coordinate by the blue component.
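
The snippet below is a minimal sketch of the per-pixel coordinate mappings described above, written in plain C++. The names (Pixel, Vec3, rgbToHsv, and so on) are illustrative assumptions, not the identifiers from the original assignment code.

#include <cmath>

struct Vec3 { float x, y, z; };

struct Pixel { float r, g, b; };   // components normalized to [0, 1]

// Key 2: RGB cube -- each color component becomes one spatial axis.
Vec3 rgbCubePosition(const Pixel& p) {
    return { p.r, p.g, p.b };
}

// Helper: convert RGB to HSV (h in radians, s and v in [0, 1]).
void rgbToHsv(const Pixel& p, float& h, float& s, float& v) {
    const float PI = 3.14159265f;
    float maxc  = std::fmax(p.r, std::fmax(p.g, p.b));
    float minc  = std::fmin(p.r, std::fmin(p.g, p.b));
    float delta = maxc - minc;
    v = maxc;
    s = (maxc > 0.f) ? delta / maxc : 0.f;
    float hue = 0.f;
    if (delta > 0.f) {
        if (maxc == p.r)      hue = std::fmod((p.g - p.b) / delta, 6.f);
        else if (maxc == p.g) hue = (p.b - p.r) / delta + 2.f;
        else                  hue = (p.r - p.g) / delta + 4.f;
        if (hue < 0.f)        hue += 6.f;
    }
    h = hue * (PI / 3.f);   // sextant -> radians
}

// Key 3: HSV cylinder -- hue is the angle, saturation the radius, value the height.
Vec3 hsvCylinderPosition(const Pixel& p) {
    float h, s, v;
    rgbToHsv(p, h, s, v);
    return { s * std::cos(h), s * std::sin(h), v };
}

// Key 4: 2D space -- x from red + green, y from blue.
Vec3 planePosition(const Pixel& p) {
    return { p.r + p.g, p.b, 0.f };
}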


 

PART 2

This assignment takes a mono audio file as input and performs granular synthesis on it. The complete audio waveform is divided into grains, and each grain is then processed and rearranged using different techniques. The following methods are implemented in the code (a sketch of the grain analysis appears after the list).

1. Grains are sorted by RMS (played by pressing ‘1’).

2. Grains are sorted by the number of zero crossings (played by pressing ‘2’).

3. Smoothing of each grain is performed by multiplying it with a Hamming window.
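
Below is a minimal C++ sketch of these grain analysis steps: splitting a mono buffer into fixed-size grains, computing RMS and zero-crossing counts for sorting, and applying a Hamming window to smooth each grain. The names and the grain size are illustrative assumptions, not taken from the original code.

#include <algorithm>
#include <cmath>
#include <vector>

struct Grain {
    std::vector<float> samples;
    float rms = 0.f;
    int zeroCrossings = 0;
};

// Split the input into consecutive grains of grainSize samples each.
std::vector<Grain> makeGrains(const std::vector<float>& audio, std::size_t grainSize) {
    std::vector<Grain> grains;
    for (std::size_t start = 0; start + grainSize <= audio.size(); start += grainSize) {
        Grain g;
        g.samples.assign(audio.begin() + start, audio.begin() + start + grainSize);
        grains.push_back(std::move(g));
    }
    return grains;
}

// Compute RMS and zero-crossing count for one grain.
void analyzeGrain(Grain& g) {
    double sumSq = 0.0;
    g.zeroCrossings = 0;
    for (std::size_t i = 0; i < g.samples.size(); ++i) {
        sumSq += g.samples[i] * g.samples[i];
        if (i > 0 && (g.samples[i - 1] < 0.f) != (g.samples[i] < 0.f))
            ++g.zeroCrossings;
    }
    g.rms = std::sqrt(static_cast<float>(sumSq / g.samples.size()));
}

// Smooth a grain by multiplying it with a Hamming window.
void applyHammingWindow(Grain& g) {
    const float PI = 3.14159265f;
    std::size_t n = g.samples.size();
    for (std::size_t i = 0; i < n; ++i)
        g.samples[i] *= 0.54f - 0.46f * std::cos(2.f * PI * i / (n - 1));
}

// Sort grains by RMS (key '1') or by zero-crossing count (key '2').
void sortGrains(std::vector<Grain>& grains, char key) {
    if (key == '1')
        std::sort(grains.begin(), grains.end(),
                  [](const Grain& a, const Grain& b) { return a.rms < b.rms; });
    else if (key == '2')
        std::sort(grains.begin(), grains.end(),
                  [](const Grain& a, const Grain& b) { return a.zeroCrossings < b.zeroCrossings; });
}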


 

PART 3

VISUALIZING INPUT SOUND USING AN IMAGE: The z coordinate of every pixel is driven by the STFT of the input audio at that moment in time. The input audio is captured from the surroundings using the microphone.
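
The following is a minimal C++ sketch of how an audio frame's spectrum can drive the z coordinate of pixels in this kind of visualization. It uses a naive DFT for clarity; the actual assignment presumably used an FFT and a live microphone callback, and all names here are illustrative assumptions.

#include <algorithm>
#include <cmath>
#include <vector>

// Magnitude spectrum of one audio frame (naive DFT, first N/2 bins).
std::vector<float> magnitudeSpectrum(const std::vector<float>& frame) {
    const float PI = 3.14159265f;
    std::size_t n = frame.size();
    std::vector<float> mags(n / 2);
    for (std::size_t k = 0; k < n / 2; ++k) {
        float re = 0.f, im = 0.f;
        for (std::size_t t = 0; t < n; ++t) {
            float angle = 2.f * PI * k * t / n;
            re += frame[t] * std::cos(angle);
            im -= frame[t] * std::sin(angle);
        }
        mags[k] = std::sqrt(re * re + im * im);
    }
    return mags;
}

// Map each spectral bin onto the z coordinate of one column of pixels.
void updatePixelHeights(const std::vector<float>& frame,
                        std::vector<float>& pixelZ, float scale) {
    std::vector<float> mags = magnitudeSpectrum(frame);
    std::size_t cols = std::min(pixelZ.size(), mags.size());
    for (std::size_t x = 0; x < cols; ++x)
        pixelZ[x] = scale * mags[x];
}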