Comments

  • Hello Leon,
    yes, the interview is short...
    So, the info-to-light question is important. As I understand it, the position and shape of the dancer are captured by an infrared camera in a space flooded with infrared light, feeding a computer running Kalypso:
    http://www.frieder-weiss.de/eyecon/download.html
    This is a video tracking system, and the same computer also generates the graphic video output. So it is real-time processing: the video signal from the camera is analyzed and algorithmically manipulated, so that the "blob" (the dancer) and its position (among other parameters) serve as the foundation for remapping those parameters into black and white graphics.
    The result is projected in alignment with the camera, so it corresponds one to one. The resulting light is a playful and fairly simple mapping into generative black and white graphics that have a very strong contingency with the dancer's actions. The "light" in this case is entirely generated by one computer, since any digital projection output is light.
    So, the short answer is that from the video signal to the output there is a bunch of algorithms.
    Besides the aesthetic values of this work, it is the interfacing setup that is so effective: the computer does not get into feedback with its own generated graphics. This is accomplished by orchestrating the infrared camera, the infrared lights, the projector (which emits no infrared light), and infrared-pass filters. The rest is mapping.
    I hope this helps...
    Ah,
    there is this very clear computer vision reference by Golan Levin:
    http://www.interactiondesign-lab.com/progetti/media/ws_videotrackin....
  • doesn't really say much about how it translates the Kalypso tracking info into light...
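To make the tracking-to-light step a bit more concrete: the pipeline described above (threshold the infrared image, find the bright "blob" that is the dancer, extract its parameters, remap them to graphic parameters) can be sketched in a few lines of plain Python. This is a minimal illustration under my own assumptions, not Kalypso's actual code; the function names, threshold value, and the toy 4x4 frame are all invented for the example.

```python
# Sketch of the camera -> blob -> graphics pipeline described above.
# All names and values here are illustrative assumptions, not Kalypso internals.

def find_blob(frame, threshold=128):
    """Return (centroid_x, centroid_y, area) of the bright pixels in a
    grayscale frame (a list of rows), or None if nothing passes threshold."""
    xs, ys, area = 0, 0, 0
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            if value >= threshold:      # dancer lit by the infrared floods
                xs += x
                ys += y
                area += 1
    if area == 0:
        return None
    return (xs / area, ys / area, area)

def remap(blob, width, height):
    """Remap blob parameters to normalized graphic parameters (0..1),
    e.g. the position and size of a generated white shape on black."""
    cx, cy, area = blob
    return {
        "x": cx / (width - 1),
        "y": cy / (height - 1),
        "size": area / (width * height),
    }

# Toy 4x4 "infrared frame": a 2x2 bright blob in the lower-right corner.
frame = [
    [0, 0,   0,   0],
    [0, 0,   0,   0],
    [0, 0, 200, 210],
    [0, 0, 220, 255],
]
blob = find_blob(frame)        # centroid (2.5, 2.5), area 4
params = remap(blob, 4, 4)
print(params)
```

Because the projector is aligned with the camera and emits no infrared, the graphics drawn from `params` land back on the dancer without ever appearing in the camera's input, which is exactly the feedback-free loop the comment describes.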