(page still very much in progress..)
Make a system that analyzes an incoming camera stream and visualizes the information extracted from it in real time. The software should give an interesting interpretation of what happens in front of the camera.
[to be finished:
– think of alternative forms of representation / question the camera image
– think of the stream of data and visualization of features of that data
– basics of analysis and synthesis, processing
– ‘interesting’: different, aesthetics ]
For those with little experience in programming in general, or in programming image analysis and synthesis in particular, the final assignment is a large one and the learning curve to reach the right level of programming can be quite steep. To help you structure the process of learning the necessary programming skills, three smaller assignments will be given during the first four lab sessions of this year’s course. These assignments lead up to the final assignment. Each has to be completed before the next lab day and emailed to Atze de Vries, the student assistant for this course, at embodiedvision (at) interfaculty (dot) nl.
The small assignments are:
before May 7th:
Make a camera-based fader. A user should be able to control a value by moving in front of a camera. You are free to choose which aspect of the stream of images your software reacts to.
before May 14th:
Make an image generator that is controlled by hand movements in front of a camera.
before May 21st:
Make a visualization of the amount of movement in a camera stream over a period of at least three seconds.
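One simple technique that can serve all three small assignments is frame differencing: comparing successive frames to measure how much has changed. The sketch below is only an illustration of the idea, not a required approach; it uses synthetic grayscale frames (flat lists of 0–255 values) instead of a real camera, and the frame rate and window size are assumptions you would adapt to your own setup.

```python
from collections import deque

def motion_amount(prev, curr):
    """Mean absolute pixel difference between two grayscale frames
    (each a flat list of 0-255 intensity values)."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def normalize(value, max_value=255.0):
    """Map a motion amount onto a fader value in [0, 1]."""
    return min(value / max_value, 1.0)

# Rolling history: at an assumed 10 frames per second,
# 30 stored values span the required three seconds.
FPS = 10
history = deque(maxlen=3 * FPS)

# Two synthetic 4x4 frames: a dark frame and one with a bright patch,
# standing in for consecutive camera frames.
frame_a = [0] * 16
frame_b = [0] * 12 + [255] * 4

history.append(normalize(motion_amount(frame_a, frame_b)))
print(history[-1])  # 0.25: a quarter of the pixels changed fully
```

The same `history` buffer could then drive a drawing routine for the visualization assignment, while the latest normalized value is already a usable camera-based fader.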
goals of the final assignment:
This assignment asks you to reflect on what perception is and to translate your interpretation of the subjects discussed in the lectures into a concept for software. In a more practical sense, this assignment is meant to familiarize you with image-handling techniques and elementary image-analysis algorithms.
There is a deliberate vagueness in using the word ‘interesting’ in the description of the assignment; this vagueness is meant as an open space in which to formulate your own goal or develop an approach by way of experimentation.
– often you don’t need very sophisticated computer vision algorithms to get good results; clever use of simple analysis is more elegant and more interesting. Object following and recognition are easy for humans but very hard for computers, so it is advisable to think of an approach in which knowledge of objects is not necessary.
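As an example of simple analysis that avoids any knowledge of objects: instead of recognizing a hand, you can follow the brightest point in the frame, which works surprisingly well if the user holds a small light or the scene is otherwise controlled. This is a hypothetical illustration in plain Python on a synthetic frame, not a prescribed method.

```python
def brightest_point(frame, width):
    """Return the (x, y) position of the brightest pixel in a
    flat grayscale frame of the given width."""
    i = max(range(len(frame)), key=frame.__getitem__)
    return (i % width, i // width)

# A synthetic 4x3 frame with one bright spot.
frame = [10, 10,  10, 10,
         10, 10, 200, 10,
         10, 10,  10, 10]

print(brightest_point(frame, 4))  # (2, 1)
```

Tracking how this point moves from frame to frame already gives you a two-dimensional controller, with no object recognition involved.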
– it is fine to make software that is designed to handle only a certain type of image input as long as you make this explicit. Also what a webcam in a laptop sees is not always very rich or interesting and you are welcome to choose richer types of input, such as clips or webcams elsewhere. You should then also provide an example of your ideal type of input. It is not permitted to make software that only works with one particular source clip.
– you are welcome to think about a sound dimension to your system. Sound is not strictly part of this assignment, though.
For this assignment you are encouraged to use the Max/MSP/Jitter environment. Like MSP, Jitter is an addition to the original Max graphical programming language. Jitter is a collection of objects for matrix manipulation, designed to work with real-time video.
It is possible to use another programming environment, as long as the following requirements are met:
– the programming environment should be powerful enough to do the assignment in it
– you should be able to figure out your implementation without the student assistant (who might be willing to also help with projects using Processing and OpenFrameworks if you’re kind to him)
– it should be possible for you to share both a working version of your software and a readable (annotated) version of the source that enables me to figure out your approach.
Other possible environments include Pd/GEM (similar to Max/MSP/Jitter, but free and possibly less coherent), vvvv (free, Windows only), Processing (free, Java-based), NodeBox (free, Mac only, Python-based), OpenFrameworks (free, C++-based) or of course plain Python, Java, Ruby, C++, Objective-C or any language of your choice.
Atze de Vries will be the assistant for this course and he will be available to help you with the Max/MSP/Jitter part. He will also give an introduction to programming in Max/MSP/Jitter. Towards the end of the course Joost Rekveld will be available to answer questions and to discuss approaches and ideas relating to the assignment.
presentation and grades:
The assignment should result in a short paper, a short video, a working piece of software, and a short presentation. The final assignment will only be graded if the three smaller assignments have been handed in.
The paper should clearly explain the concept and the principles behind the implementation. It should be between 2 and 4 pages long, and sent to me via email ( joost.rekveld (at) interfaculty (dot) nl ) in either .odt, .doc or .pdf format. Please use the IEEE guidelines to format your paper.
The software should be available for download with all that is necessary to make it function on Mac, Windows or Linux (it is not necessary to provide versions for more than one platform). If you work in Jitter, please export your project as a collective (which will be cross-platform if you use no third-party externals).
Please put your video online and post the URL on the forum.
During the presentation on the 11th of June the video should be shown and the software should be working and demonstrated. The concept and the principles behind the implementation should also be explained. I should have the paper and the software by the 18th of June.
[Brakhage, Belson, data visualization]
Good introductions to the subject are:
– William C. Wees, “Light Moving in Time”