assignment 2010/2011

final assignment:

Make a piece of software that edits a short movie out of a longer clip of source footage. The edit should be based on an analysis of the source footage and the editing choices should offer an interesting interpretation of the source footage. Make a short video demonstrating the successful use of your software. You can do this assignment on your own or with one other person.



This assignment was inspired by the “Automatic Trailer Generation / Semantic Video Patterns” project carried out in the ‘Digital media’ program at the University of Dresden. This group has been studying how to automatically generate movie trailers from an analysis of the feature film to be advertised. For this, they concentrated on action movies, since both action movies and their trailers are highly codified forms of cinema, in which detecting camera movement, explosions and gunshots already goes a long way towards decoding the basic structure of the film.

smaller assignments:

To help you, three smaller assignments will be given during the first four lab sessions of this year’s course. These assignments lead up to the final assignment. They have to be completed before the next lab day and emailed to Atze de Vries, the student assistant for this course, at embodiedvision2011 (at) gmail (dot) com.

The small assignments are:

before April 18th:
Make a system that plays a video clip and combines the current image with the image from one second earlier. For the combination of the two images you should use one of the binary operators of jit.op.
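In Jitter this would be a buffered jit.matrix feeding one inlet of jit.op, but the idea can also be sketched in plain Python (one of the accepted environments). A minimal, hypothetical sketch, with frames as flat lists of pixel intensities and absolute difference standing in for one of jit.op's binary operators; the frame rate is an assumption:

```python
from collections import deque

FPS = 25  # assumed frame rate; one second of delay = FPS frames

def combine(a, b):
    """Binary operator applied per pixel: absolute difference,
    analogous to jit.op with @op absdiff."""
    return [abs(x - y) for x, y in zip(a, b)]

def process(frames, delay=FPS):
    """Combine each frame with the frame `delay` frames earlier.
    Frames are flat lists of pixel intensities (0-255)."""
    buffer = deque(maxlen=delay)
    out = []
    for frame in frames:
        if len(buffer) == delay:
            # buffer[0] is the frame from `delay` frames ago
            out.append(combine(frame, buffer[0]))
        else:
            out.append(frame)  # not enough history yet: pass through
        buffer.append(frame)
    return out
```

The ring buffer (deque with maxlen) is the whole trick: in Jitter the equivalent is storing matrices and reading them back with a delay.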

before May 2nd:
Make a camera-based fader. A user should be able to control a value by moving in front of a camera. You are free to choose what aspect of the stream of images your software reacts to.
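One common choice of aspect to react to is the overall amount of motion. A hypothetical sketch in plain Python, with frames as flat lists of pixel intensities: the mean absolute difference between consecutive frames is scaled to a 0.0-1.0 fader value and smoothed so the fader does not jitter (the smoothing and scale constants are assumptions to tune per camera):

```python
class MotionFader:
    """Maps the amount of motion between consecutive camera frames
    to a 0.0-1.0 fader value, with exponential smoothing."""

    def __init__(self, smoothing=0.8, scale=10.0):
        self.prev = None
        self.value = 0.0
        self.smoothing = smoothing  # 0 = jumpy, close to 1 = sluggish
        self.scale = scale          # mean pixel difference mapping to 1.0

    def update(self, frame):
        if self.prev is not None:
            diff = sum(abs(a - b) for a, b in zip(frame, self.prev))
            diff /= len(frame)
            target = min(diff / self.scale, 1.0)
            self.value = (self.smoothing * self.value
                          + (1.0 - self.smoothing) * target)
        self.prev = list(frame)
        return self.value
```

In Jitter the same structure would be a frame difference (jit.op @op absdiff on the current and previous matrix) followed by jit.3m or similar to reduce the matrix to one number.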

before May 9th:
Make a system that selects ‘good’ image sequences from a video clip, according to a criterion you invented.
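The structure of such a selector is usually the same whatever the criterion: score each frame, then collect runs of consecutive frames that score well. A hypothetical sketch in plain Python, using mean brightness as a placeholder criterion (the scoring function is exactly the part you are meant to invent):

```python
def score(frame):
    """Per-frame 'goodness': here, mean brightness, as a stand-in
    for whatever criterion you invent."""
    return sum(frame) / len(frame)

def good_sequences(frames, threshold, min_len=2):
    """Return (start, end) index pairs of runs of at least min_len
    consecutive frames whose score exceeds the threshold."""
    runs, start = [], None
    for i, frame in enumerate(frames):
        if score(frame) > threshold:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_len:
                runs.append((start, i))
            start = None
    if start is not None and len(frames) - start >= min_len:
        runs.append((start, len(frames)))
    return runs
```

The min_len parameter matters in practice: single frames that happen to score well rarely make watchable shots.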

goals of the final assignment:

This assignment makes the question behind the “Automatic Trailer Generation” approach more general: how can we automate the making of editing decisions?
On a practical level this assignment is meant to familiarize you with image handling software and elementary image analysis techniques.
On another level there is a deliberate vagueness in using the word ‘interesting’ in the description of the assignment; this vagueness is meant as an open space in which to formulate your own goal. There is no pressure to use (or pretend to use) a top-down approach in which you start with a clearly defined idea, and in which your implementation and the resulting video are the consequence of that first idea. If it works like that for you that is great, but it is also perfectly fine to start with a small idea and see where that takes you. During this process you’ll probably be adjusting the ‘big picture’ to the detail and the detail to the ‘big picture’ as you go along. So please feel free to adjust or even formulate your goal along the way, as long as you document this research process and make these adjustments explicit.
Joost will provide a Jitter patch that can handle the rendering of the final clip, so that while making this assignment you can concentrate on the automated editing decisions.

some suggestions:

- often you don’t need very sophisticated computer vision algorithms to get good results; clever use of simple analysis is more elegant and more interesting. Object tracking and recognition are easy for humans but very hard for computers, so it is advisable to think of an approach in which knowledge of objects is not necessary.

- it is fine to make software that is designed to handle only a certain type of footage as long as you make this explicit. It is not fine to make software that only works with one particular source clip.

- think about rules for the transition between shots. A good way to become more aware of the rules of video editing is to analyze a commercial or a fragment of a film. Try to locate every cut in the fragment you chose and write down the essence of each shot. Look at the movement around the cuts.
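The same exercise can be attempted automatically: a hard cut usually shows up as an abrupt change in the image statistics between two consecutive frames. A hypothetical sketch in plain Python, comparing coarse intensity histograms of consecutive frames (the bin count and threshold are assumptions to tune on your footage):

```python
def histogram(frame, bins=8):
    """Coarse intensity histogram of a frame
    (a flat list of 0-255 pixel values)."""
    h = [0] * bins
    for p in frame:
        h[min(p * bins // 256, bins - 1)] += 1
    return h

def find_cuts(frames, threshold=0.5):
    """Report frame indices where the histogram changes abruptly,
    a common simple criterion for detecting hard cuts."""
    cuts = []
    for i in range(1, len(frames)):
        h1 = histogram(frames[i - 1])
        h2 = histogram(frames[i])
        # normalized sum of absolute bin differences, in [0, 2]
        dist = sum(abs(a - b) for a, b in zip(h1, h2)) / len(frames[i])
        if dist > threshold:
            cuts.append(i)
    return cuts
```

Histograms ignore where pixels are, only what values they have, so this survives camera movement within a shot better than a plain frame difference would.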

- instead of having Jitter render your video, you could also have Jitter produce an EDL file. An EDL (edit decision list) file contains editing instructions that editing packages like Final Cut Pro or Premiere can read. In this way you can use these packages to render your piece.
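To give an idea of what such a file contains, here is a hypothetical sketch in plain Python that writes a minimal EDL in the classic CMX3600 style, given a list of (source in, source out) frame numbers. This is a simplified rendering of the format; real editing packages can be picky about the exact fields, so check the import specification of the package you target:

```python
def timecode(frames, fps=25):
    """Frame count -> HH:MM:SS:FF timecode string."""
    s, ff = divmod(frames, fps)
    m, ss = divmod(s, 60)
    h, mm = divmod(m, 60)
    return f"{h:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def write_edl(shots, title="AUTO EDIT", fps=25):
    """shots: list of (source_in, source_out) frame numbers.
    Returns a minimal CMX3600-style EDL as a string: one video cut
    event per shot, laid end to end on the record side."""
    lines = [f"TITLE: {title}", "FCM: NON-DROP FRAME", ""]
    rec = 0
    for n, (src_in, src_out) in enumerate(shots, 1):
        dur = src_out - src_in
        # event number, reel, track (V), transition (C = cut),
        # source in/out, record in/out
        lines.append(
            f"{n:03d}  AX       V     C        "
            f"{timecode(src_in, fps)} {timecode(src_out, fps)} "
            f"{timecode(rec, fps)} {timecode(rec + dur, fps)}"
        )
        rec += dur
    return "\n".join(lines) + "\n"
```

For example, write_edl([(0, 25), (100, 150)]) describes a two-shot edit: one second from the start of the source, followed by two seconds starting at frame 100.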

- please think about the sound part of your video. It is logical to also have your software edit the sound based on an analysis of the source. The sound is not strictly part of this assignment, though.


For this assignment you are encouraged to use the Max/MSP/Jitter environment. Like MSP, Jitter is an addition to the original Max graphical programming language. Jitter is a collection of objects for matrix manipulations, designed to work with real-time video.
It is possible to use another programming environment, as long as the following requirements are met:
- the programming environment should be powerful enough to do the assignment in it
- you should be able to figure out your implementation without the student assistant (who might be willing to also help with projects using Processing and OpenFrameworks if you’re kind to him)
- it should be possible for you to share both a working version of your software and a readable (annotated) version of the source that enables me to understand your approach.
Other possible environments include pd/gem (similar to Max/MSP/Jitter, but free and possibly less coherent), vvvv (free, Windows only, not sure about file handling), Processing (free, Java based), NodeBox (free, Mac only, Python based), OpenFrameworks (free, C++ based) or of course plain Python, Java, Ruby, C++, Objective-C or any language of your choice.


Atze de Vries will be the assistant for this course and he will be available to help you with the Max/MSP/Jitter part. He will also give an introduction to programming in Max/MSP/Jitter. Towards the end of the course Joost Rekveld will be available to answer questions and discuss approaches and ideas relating to the assignment.

still from “A Movie” by Bruce Conner (1958)

presentation and grades:

The assignment should result in a short paper, a short video, a working piece of software, and a short presentation. The final assignment will only be graded if the three smaller assignments have been handed in.
The paper should clearly explain the concept and the principles behind the implementation. It should be between 2 and 4 pages long, and sent to me via email ( joost.rekveld (at) interfaculty (dot) nl ) in either .odt, .doc or .pdf format. Please use the IEEE guidelines to format your paper.
The software should be available for download with all that is necessary to make it function on Mac, Windows or Linux and with any source material (it is not necessary to have versions for both Mac and Windows, and it is OK to restrict the input material to a few common video formats). If you work in Jitter, please export your project as a collective (which will be cross-platform if you use no third-party externals).
Please put your video online and post the url on the forum.
During the presentation on the 23rd of May the video should be shown and the software should be working and demonstrated. The concept and the principles behind the implementation should also be explained. I should have the paper and the software by Tuesday the 24th of May, 10am.

still from “Piece Touchée” by Martin Arnold (1989)


There is a rich tradition of so-called ‘found footage‘ films. Filmmakers in the genre include Bruce Conner (who made the classic “A Movie“), Martin Arnold, Craig Baldwin, Abigail Child, Cecile Fontaine, Al Razutis, David Rimmer and Michael Wallin.
Filmmaker and collector Rick Prelinger started his online Prelinger Film Archive with found footage filmmakers in mind.

Good introductions to the subject are:

- William C. Wees, “Recycled Images: The Art and Politics of Found Footage Films”, Anthology Film Archives, New York: 1993
- “The Recycled Cinema