The following examples are intended to demonstrate the use of the proposed framework to generate hypervideos, and to illustrate some of the developed components. These samples refer mainly to two videos annotated with the Advene prototype: Murnau's Nosferatu and Tim Berners-Lee's talk at TED 2009. See below for further technical details.

WebCHM Set of Samples

Video Subtitles

This sample shows a common case of textual subtitling: the transcription of the video speech, displayed in synchronization with the video.
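The framework's subtitle component is not detailed here, but the underlying idea of time-synchronized display can be sketched with the standard HTML5 video API. In the sketch below, the annotation fields (begin, end, content) and the element ids are assumptions made purely for illustration.

```typescript
// Minimal sketch of time-synchronized subtitle display; this is NOT the
// actual framework component. Annotation shape and element ids are assumed.
interface SubtitleAnnotation {
  begin: number;   // start time, assumed to be in milliseconds
  end: number;     // end time, assumed to be in milliseconds
  content: string; // transcribed speech
}

const video = document.querySelector<HTMLVideoElement>("#player")!;
const caption = document.querySelector<HTMLElement>("#caption")!;

function attachSubtitles(subtitles: SubtitleAnnotation[]): void {
  video.addEventListener("timeupdate", () => {
    const t = video.currentTime * 1000; // player time in milliseconds
    const active = subtitles.find(s => s.begin <= t && t < s.end);
    caption.textContent = active ? active.content : "";
  });
}
```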

Timeline

This example uses a hypervideo timeline to present meaningful events and features of the hypervideo story in a graphical manner.

Table of Contents

This sample shows the use of a hypervideo table of contents to give direct access to parts of the video, as in a classical ToC.

Map

In this sample, a graphical index rendered by a map component provides shortcuts into parts of the hypervideo story.

Enrichment

Enrichment components augment the information rendered by the video player with additional graphical and textual content.

Two Independent Players

A hypervideo may consist of several sub-documents. Each sub-document constitutes a hypervideo in its own right and may be temporally independent from the others. This sample shows two hypervideos, each with its own set of features and each playing independently of the other.

Table of Contents and Subtitles

This hypervideo combines two rendering components: a table of contents along with the video subtitles.

The Tim Berners-Lee Talk sample

This sample shows how a hypervideo can be created to provide more insight into the content of the Tim Berners-Lee talk through an augmented and interactive video-based presentation. Several hypervideo reading and rendering components are used to explore the content of the video in more depth.

The Nosferatu Hypervideo sample

This last sample brings together many hypervideo visualization artifacts to explore in more depth how a hypervideo may help in the analysis and understanding of a video.

Technical details

Both videos are already annotated and the resulting packages can be retrieved from the Advene website. The annotation data has been exported to JSON files.

We recommend studying the structure of these files by looking at how they are produced by Advene.
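To fix ideas, the sketch below shows one plausible shape for an exported annotation. The exact field names and units depend on the Advene export and should be checked against the actual files; everything here is an assumption made for illustration only.

```typescript
// Hypothetical shape of an exported annotation; the real Advene JSON export
// may differ. Check the files produced by Advene for the actual structure.
interface ExportedAnnotation {
  id: string;      // annotation identifier (assumed field name)
  type: string;    // annotation type, e.g. "subtitle" or "chapter" (assumed)
  begin: number;   // start time, assumed in milliseconds
  end: number;     // end time, assumed in milliseconds
  content: string; // textual content of the annotation (assumed)
}

const example: ExportedAnnotation = {
  id: "a42",
  type: "subtitle",
  begin: 12000,
  end: 15500,
  content: "Example subtitle text.",
};
```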

In both cases, the data reader used is a JSonReader component. The last example makes use of a custom reader, a DataReader component, to retrieve online information from Wikipedia.
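The reader interface itself is not reproduced here; the sketch below only illustrates the general idea of a custom reader that fetches Wikipedia summaries, using the public MediaWiki REST summary endpoint. The class and method names are assumptions, not the actual DataReader API.

```typescript
// Illustrative custom reader fetching article summaries from Wikipedia.
// Class and method names are assumptions; only the Wikipedia REST endpoint
// (https://en.wikipedia.org/api/rest_v1/page/summary/{title}) is real.
class WikipediaReader {
  constructor(private readonly lang: string = "en") {}

  // Fetch a short plain-text summary for a given article title.
  async getSummary(title: string): Promise<string> {
    const url =
      `https://${this.lang}.wikipedia.org/api/rest_v1/page/summary/` +
      encodeURIComponent(title);
    const response = await fetch(url);
    if (!response.ok) {
      throw new Error(`Wikipedia request failed: ${response.status}`);
    }
    const data = await response.json();
    return data.extract as string; // "extract" holds the summary text
  }
}

// Example usage: enrich an annotation about the film with background material.
// new WikipediaReader().getSummary("Nosferatu").then(text => console.log(text));
```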

Thanks to the website export functionality offered by Advene, a cache of screenshots corresponding to the timecodes defined by the annotations is available. This allows us to use these resources directly within the developed samples.
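As a rough illustration, resolving a cached screenshot from an annotation's begin timecode could look like the sketch below. The cache directory name and file naming convention are assumptions, not the actual layout produced by the Advene website export.

```typescript
// Sketch of resolving a cached screenshot for an annotation's begin timecode.
// The "screenshots" directory and timestamp-based file names are assumptions;
// check the layout actually produced by the Advene website export.
function screenshotUrl(baseUrl: string, beginMs: number): string {
  return `${baseUrl}/screenshots/${beginMs}.png`; // assumed naming scheme
}

// Example: screenshotUrl("http://example.org/nosferatu", 12000)
// -> "http://example.org/nosferatu/screenshots/12000.png"
```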