Presentation Feedback Notes

Jet lag is settling in…send help! The question-and-answer portion of my biofeedback project presentation was so helpful. People shared insights and recommendations that I had not yet considered. I wanted to record them here so we don’t forget them moving forward:


- Will there be discrete data driving particular outcomes? In other words, will we identify specific brainwave readings that manipulate specific camera functions? For example, if the actor is focusing, will the camera focus change? If the actor is blinking, will the camera aperture change? Furthermore, is it a priority that the actor can control these functions intentionally? Will the actor have to be “trained” to produce a repeatable/consistent brainwave output?
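To make the "discrete data driving particular outcomes" question concrete, here is a minimal sketch of one possible mapping. Everything in it is a hypothetical placeholder — the 0–100 "attention" scale, the thresholds, and the f-stop list are invented for illustration, not readings from any real headset or camera API:

```python
# Hypothetical f-stop values, widest to narrowest aperture.
F_STOPS = [1.8, 2.8, 4.0, 5.6, 8.0]

def attention_to_fstop(attention):
    """Map a 0-100 'attention' reading onto one of the discrete f-stops."""
    attention = max(0, min(100, attention))
    index = min(attention * len(F_STOPS) // 101, len(F_STOPS) - 1)
    return F_STOPS[index]

def stable_fstop(readings, window=5):
    """Only commit to an aperture change when the last few readings agree.

    This is where actor 'training' would matter: the output only moves
    when the brainwave signal is repeatable over the window.
    """
    stops = {attention_to_fstop(r) for r in readings[-window:]}
    return stops.pop() if len(stops) == 1 else None
```

The `stable_fstop` windowing is one way to handle the repeatability question: an untrained actor producing noisy readings would simply never trigger a change.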


- I need to follow up with this commenter, but someone brought up a new camera technology that pairs the user’s gaze to focus / depth of field. If I followed correctly, the conversation that followed offered a cheaper DIY alternative: producing a similar effect on the post-processing side by applying blur filters to the footage.
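A rough sketch of that DIY idea: fake a shallow depth of field in post by blurring everything outside a region around the gaze point. The gaze coordinates and radius are assumed inputs here (e.g. from an eye-tracker log), and a naive box blur stands in for whatever blur filter the editing tool provides; the frame is a 2-D grayscale array:

```python
import numpy as np

def box_blur(frame, k=5):
    """Naive k x k box blur by summing shifted copies of the frame."""
    pad = k // 2
    padded = np.pad(frame, pad, mode="edge")
    out = np.zeros(frame.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    return out / (k * k)

def fake_depth_of_field(frame, gaze_y, gaze_x, radius):
    """Keep a circle around the gaze point sharp, blur everything else."""
    blurred = box_blur(frame)
    ys, xs = np.indices(frame.shape)
    in_focus = (ys - gaze_y) ** 2 + (xs - gaze_x) ** 2 <= radius ** 2
    return np.where(in_focus, frame, blurred)
```

Running this per frame with a moving gaze point would approximate the gaze-follows-focus effect without any special camera hardware, which I think was the point being made.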


- The limitations of the Raspberry Pi camera came up a couple of times. One suggestion was to use the Kinect for image capture because of its vast and dramatic capabilities as a depth camera. Of course, the portability and autonomy of the small Pi camera module were its draw for me. Perhaps mods to the camera, or the implementation of OpenCV, could give the project a wider range of possibilities for the video captured.
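As one example of what per-frame processing could add on top of the Pi camera stream, here is a sketch of simple frame differencing — a crude motion signal that could trigger camera behavior. In OpenCV proper this would be `cv2.absdiff` plus `cv2.threshold`; this NumPy-only version just shows the idea on two grayscale frames:

```python
import numpy as np

def motion_mask(prev_frame, frame, threshold=25):
    """Mark pixels whose brightness changed by more than `threshold`."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    return diff > threshold

def motion_amount(prev_frame, frame, threshold=25):
    """Fraction of the frame that changed -- a crude activity signal."""
    return motion_mask(prev_frame, frame, threshold).mean()
```

Something like `motion_amount` fed back into the system could be one of those "wider possibilities" without giving up the Pi module's portability.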


- Someone brought up the relationship of the audience to the system… engaging the audience in a feedback loop between actor, camera, and audience sounds like a really interesting direction.


I think that’s about it. If you have any contributions, please post!


See u soon, Jessica
