Hello fellow dance-techers! This is my first official blog post for the MidiTron Wireless project I will be working on over the next six months or so. I hope to post more than once a month with small updates, images, and video, so keep an eye out. To start, I thought I would give you an overview of the project and how it fits into a larger piece currently under construction. This first post will be a bit lengthy, but I'm hoping to keep future ones shorter, with more pictures and videos.
MidiTron Wireless Control for a Life-Sized Puppet

I am working with a team of designers to construct the costume of a life-sized puppet named Leonard. The wonderful people involved with the project are as follows:

- Jordan Golding, professional costume builder/sculptor/visual artist
- Christopher Martinez, composer/media artist/programmer/visual artist
- Stjepan Rajko, programmer/dancer
- Shawn Cook, programmer/media artist/sweet man who volunteered to wear the costume

The Leonard costume will be equipped with sensors that 1. detect costume animation by the actor wearing the costume (Shawn), and 2. detect costume animation by outside actors and audience members who are interacting with the costume externally. Leonard will be modeled after the following hand puppet…
Blue = Sensors for actor/wearer manipulation
Red = Sensors for manipulation by external actors/audience members
Sensors!!!

We plan to use two types of sensors for Leonard:

1. A 3-axis accelerometer
2. Fabric bend sensors (These are amazing, cheap, and homemade! If you are interested in making your own, check out this Instructables page by Hannah Perner-Wilson -- they're really easy to make.)

Sensors used to detect costume animation by the actor wearing the costume

1. A 3-axis accelerometer and compass placed in the body of the costume will report movement and orientation data. We will use this data to model Leonard's general whole-body movement and traveling through space.
2. Two fabric bend sensors installed at the seam where the arm attaches to the body will report the relative position and activity rate of Leonard's arm gestures. This data will be used to model Leonard's arm movement/gestures.

In total, the two sets of sensors will provide us with 6 dimensions of data from which we can infer Leonard's movement quality and general emotional state. For example, we will be able to distinguish smooth movement from jerky movement, as well as linear movement patterns from angular movement patterns. This information can then be used to describe Leonard's behavioral state (for example, excited vs. calm).

Sensors used to detect costume animation by actors and audience members who are interacting with the costume externally

Actors/audience members will be able to perform the following actions with Leonard (amongst other things):

* hugging Leonard (Leonard will hug the person back and trigger bend sensors in both hands)
* shaking or holding Leonard's hands (this will activate a bend sensor in one hand)
* playing with, brushing, or adjusting Leonard's teeth (this will generate activity in bend/pressure sensors placed in a tooth)
* scratching the underside of Leonard's ears (this will activate the bend sensors located at the crease of the ears)

These sensors will be used primarily to manipulate forms of digital media that reinforce the semantics associated with playing with and cuddling a stuffed toy.
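To give a flavor of how we might tell smooth movement from jerky movement, here is a minimal sketch in Python. It assumes a stream of 3-axis accelerometer samples and computes the mean jerk (the rate of change of acceleration); the function names and the threshold are purely illustrative, not part of our actual analysis software.

```python
# Illustrative sketch: distinguishing smooth from jerky movement using
# the 3-axis accelerometer stream. Names and thresholds are made up.

def mean_jerk(samples, dt):
    """Mean magnitude of jerk (rate of change of acceleration).

    samples: list of (x, y, z) accelerometer readings
    dt: time between samples, in seconds
    """
    total = 0.0
    for (x0, y0, z0), (x1, y1, z1) in zip(samples, samples[1:]):
        dx, dy, dz = (x1 - x0) / dt, (y1 - y0) / dt, (z1 - z0) / dt
        total += (dx * dx + dy * dy + dz * dz) ** 0.5
    return total / (len(samples) - 1)

def classify(samples, dt, threshold=5.0):
    """Crude excited-vs-calm label based on average jerk."""
    return "excited" if mean_jerk(samples, dt) > threshold else "calm"

smooth = [(0.0, 0.0, 9.8)] * 10                      # steady readings
jerky = [(0.0, 0.0, 9.8), (2.0, 0.0, 9.8)] * 5       # rapidly alternating
```

With steady readings the jerk is zero and Leonard reads as "calm"; rapidly alternating readings push the jerk well past the threshold and read as "excited". In practice the threshold would have to be tuned against real rehearsal data.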
The media artifacts that emerge are intended to be humorous, childlike, annoying, playful, lovable, and cute.
System description regarding sensing and processing

All sensors inside the costume will be wired to the MidiTron Wireless, which will broadcast the sensor data to computers for processing. The MidiTron Wireless will give Leonard access to the entire theater space without any loss of interactive capability, and without the cumbersome cables that would prevent seamless interaction with audience members throughout the space.

We will route the MIDI sensor data to Max/MSP and custom-built software for real-time filtering and movement analysis. We intend to use the processed data to manipulate audio, and possibly video, in the performance and audience space.
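As a taste of the kind of real-time filtering we have in mind, the sketch below smooths incoming MIDI control-change values (0-127, as a bend sensor might send them) with an exponential moving average to tame sensor jitter before analysis. This is a hedged illustration in Python; our actual filtering will live in Max/MSP and our custom software, and the class and parameter names here are invented for the example.

```python
# Illustrative sketch: exponential-moving-average smoothing of raw MIDI
# CC values (0-127) from a bend sensor. Names are made up for the example.

class EMASmoother:
    def __init__(self, alpha=0.2):
        self.alpha = alpha   # 0 < alpha <= 1; smaller = smoother output
        self.value = None    # no reading seen yet

    def feed(self, cc_value):
        """Fold one raw CC reading into the running average."""
        if self.value is None:
            self.value = float(cc_value)
        else:
            self.value += self.alpha * (cc_value - self.value)
        return self.value

smoother = EMASmoother(alpha=0.5)
readings = [60, 64, 62, 90, 61]   # raw CC values with a spike at 90
smoothed = [smoother.feed(v) for v in readings]
```

Note how the spike at 90 is pulled back toward the running average instead of passing straight through; lowering `alpha` damps spikes further at the cost of slower response to genuine gestures.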
Case Study: The Performance Context for Leonard

Leonard is one part of a larger funded work titled Case Study. While the online component of the piece is currently under development, the live performance in which the costume will be used will take place February 27th through March 1st.

The current plot of Case Study revolves around a woman, Diane Murdock, who is part of a case study organized by a group of researchers known as Sector-16 (check out the Sector-16 website). Sector-16 is a mysterious psychology research team that performs case studies on unknowing participants to better understand the complicated human mind. Sector-16's first pet project is Diane. Sector-16 finds Diane enticing as a case study subject because she suffers from a long list of behavioral disorders, including social anxiety disorder, dissociative disorder, and intermittent explosive disorder. More importantly, she refuses to leave her house. Sector-16 has full control over Diane's physical surroundings; however, Diane is oblivious to their presence in her life. The live performance of Case Study will be Sector-16's first public lecture/demonstration about their research. (Click here for more information about Case Study and links to the different online portions of the piece.)

So who is Leonard in the context of this piece? Leonard is Diane's imaginary friend, a life-sized version of her psychiatrist's therapy puppet. Diane created Leonard to break up the long periods of isolation she endures while secluded in her home. She also blames him for the messy aftermath of the frequent and violent rages she never remembers performing. Diane treats Leonard as a best friend and a scapegoat, though it is clear that he is more of a nuisance than a good friend. Whiny, excitable, and naïve, Leonard acts more like a child than the best friend of an adult woman.
One reason for this childlike manifestation is Diane's underdeveloped understanding of relationships, whether friendly or romantic. This underdeveloped concept of a friend will come through in the media feedback generated by Leonard: the sounds associated with him will consist of childlike clicks, pops, squeaks, gurgles, grunts, and laughs, a mix of child and children's-toy noises.

Other interesting and notable tidbits about the Case Study creative/development process

We are using and extending a couple of tools as part of the creative/development process. The first is Rehearsal Assistant, which we plan to use to play media in rehearsals as well as to record rehearsal footage. The second is MetavidWiki, an extension to MediaWiki, which we plan to use for archival and collaborative annotation of rehearsal footage.

In our blog posts, we will share our experience using and extending these tools. So far, we have experimented with uploading some rehearsal footage to MetavidWiki, but we still need to work out problems with the re-encoding of video. Look for more info and footage supporting this aspect of the piece in future posts.

I suppose that is it for now. We are in the process of building the internal foam structure of Leonard's head, so pictures will be soon to follow!!!

Jessica