Synapse Blog

Using Hadoop for Video Streaming

Internet Memory provides a service to browse archived Web pages, including multimedia content. We use Hadoop, HDFS and HBase to store and index our data, and associate this storage with a Web server that lets users navigate through the archive and retrieve documents. In this post, we focus on videos and detail the solution we adopted to serve true streaming from HDFS storage.

Basics

Many video formats are found on the Web, including Windows Media (.wmv), RealMedia (.rm), QuickTime (.mov), MPEG, Adobe Flash (.flv), etc. In order to display a video, we need a player, which can be embedded in the Web browser. The player depends on the specific video format, but most browsers are able to detect the format and choose the appropriate player. Firefox, for instance, comes with many plugins that can be quickly loaded to display the content of a specific video format.

There are basically two ways to play a video. The simplest one is a two-step process: first the whole file is downloaded from the Web server to the user’s computer, then the player plays the local copy. Its disadvantage is that the download step may take a while if the file is big (hundreds of megabytes are not uncommon). The second one uses (true) streaming: the video file is split into fragments which are sent from the Web server to the player, giving the illusion of a continuous stream. From the user’s point of view, it looks as if a window is swept over the video content, avoiding the need for a full initial download of the whole file.

Obviously, streaming is a more involved method because it requires strong coordination between the components involved in the process, namely the player, the Web server, and the file system from which the video is retrieved. We examine this technical issue in the context of a Hadoop system where files are stored in HDFS, a file system dedicated to large-scale distributed storage.

File seeking with HDFS

As explained above, streaming requires strong coordination between the Web server and the file system. The former issues requests to access chunks of the video file (think of what happens when the user suddenly jumps to a specific part of the video), whereas the latter must be able to seek in the file to position the cursor at a specific location. When using HDFS, enabling such close cooperation turns out to be a problem because HDFS can in principle only be accessed through a Hadoop client, which the standard Apache server is not. We investigated two possible solutions: Hoop, the Hadoop Web server, and Apache/FUSE.
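For the record, seeking itself is well supported by the native Hadoop client: the FSDataInputStream returned when opening an HDFS file can be positioned at an arbitrary byte offset. The minimal sketch below illustrates this (the file path and offset are made up for illustration); the whole difficulty is that a standard Apache server cannot issue such calls by itself.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsSeekExample {
        public static void main(String[] args) throws Exception {
            // Assumes the NameNode address is configured in core-site.xml
            FileSystem fs = FileSystem.get(new Configuration());

            // Hypothetical video file, for illustration only
            FSDataInputStream in = fs.open(new Path("/videos/sample.mp4"));

            // Position the cursor at an arbitrary byte offset, exactly what a
            // streaming server must do when the user jumps into the video
            in.seek(10000000L);

            byte[] chunk = new byte[65536];
            int read = in.read(chunk);
            System.out.println("Read " + read + " bytes from offset 10000000");

            in.close();
            fs.close();
        }
    }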

Hoop (see http://cloudera.github.com/hoop/) is an HTTP-HDFS connector: it allows the HDFS file system to be accessed via HTTP. A working local prototype was developed using JW Player and a large video file. Streaming works, but seeking into an unbuffered part results in the playback stopping. The Hoop API apparently does not support seeking at an arbitrary offset in a file, so we had to give up this approach.

The second solution is based on HDFS/FUSE. FUSE (File System in Userspace) is an API that intercepts file system operations and allows them to be implemented by ad-hoc functions running in the user’s process space (thereby avoiding the need to change the operating system kernel, a tricky and dangerous option). FUSE support is provided in Hadoop as a component named “Mountable HDFS” (see http://wiki.apache.org/hadoop/MountableHDFS). It lets a standard file system user or program see the HDFS namespace as a locally mounted directory. All file system operations, including directory browsing, file opening and content access, are enabled over HDFS content through the FUSE interface.
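As a rough sketch of the setup (the NameNode address and mount point are placeholders, and the exact name and location of the wrapper script vary across Hadoop versions and distributions), mounting the HDFS namespace with the fuse_dfs component looks like this:

    # Create the mount point and mount the HDFS namespace on it
    mkdir -p /export/hdfs
    fuse_dfs_wrapper.sh dfs://namenode.example.com:9000 /export/hdfs

    # From now on, ordinary programs see HDFS as a local directory
    ls /export/hdfs

Once the mount is in place, any process on the machine, including the Apache server discussed next, can open and seek in HDFS files through the standard file system interface.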

Apache server configuration

The remaining task was to configure Apache to access the mounted FUSE file system and load content from video files. How this is done depends on the video format. So far, we have tested and validated .mp4 files and Flash video files. For the first format we use the H264 Streaming Module (see http://h264.code-shop.com/trac), an Apache plugin that enables adaptive streaming. For FLV we use a pseudo-streaming module for Apache named “mod_flv”. Both behave nicely and work with the mountable HDFS without problems.
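To give an idea of what the Apache side looks like, here is a hedged sketch of the relevant directives. The H264 lines follow that module’s documentation; the FLV handler name is that of one common pseudo-streaming module variant and may differ in your build, and the alias and mount point are placeholders carried over from the mount sketch above.

    # Expose the FUSE-mounted HDFS tree like any local document tree
    # (/export/hdfs is the hypothetical mount point used above)
    Alias /archive /export/hdfs

    # MP4: H264 Streaming Module (adaptive streaming, seeks via a start offset)
    LoadModule h264_streaming_module modules/mod_h264_streaming.so
    AddHandler h264-streaming.extensions .mp4

    # FLV: pseudo-streaming module (handler name may differ in your variant)
    LoadModule flv_streaming_module modules/mod_flv_streaming.so
    AddHandler flv-stream .flv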

Conclusion

The solution based on Apache + Mountable HDFS (FUSE) turned out to be reliable, functionally adequate (seeking is well supported) and efficient. The architecture is simple and easy to set up, and lets us combine the benefits of HDFS for very large repositories with standard Web server streaming solutions. Although we chose to adopt Apache plugins in our current service, nothing keeps you from using a more powerful streaming server, since the FUSE approach (virtually) moves all the HDFS content into the standard file system scope.

Hoop remains a potential option for the future, but it did not appear mature enough when we tested it, at least for the complex operations (seeking at a specific offset in a file) required by video streaming.

 

by: Philippe Rigaux

Comments

I’d like to learn more about your system.

(Danny Hermanus, 2012 02 11)


Philippe,


What happened when you tried to seek? If it didn’t work, then those two modules weren’t used. I’d like to use this across multiple networks, as Wowza Media Server (for me at least) is a complex setup when it comes to scaling.

(Martin Corona, 2012 02 22)


Very well explained. Thanks for the post.

(icecube_media, 2012 04 23)


Can we write/read into HDFS directly? Even if we are using a local user directory path, will it be reflected in HDFS? For example, I have a local directory named temp, and inside it I have multiple folders; by mounting temp into HDFS, will all my folders be available in HDFS?

(visioner.sadak, 2012 09 11)


I think MapR is ready to use for video streaming.
They have native NFS for HDFS link, write and read.
We can use MapR M3, the free one.
For more information go to www.mapr.com (I am not working at that company, but I have installed the system already); I never had a chance to test the video streaming, though.

(Danny, 2013 03 19)


Danny,

Can you explain in your own words what you mean by: "They have native NFS for HDFS link, write and read"? Do you have an idea about I/O rates?

(Robert, 2013 04 24)


Hi,

Thank you for the information.

Could you please explain the installation and configuration steps for these on a Linux machine, Red Hat or Ubuntu?

(Vagabond, 2013 05 07)


Regarding installation on Linux systems, please have a look at specialized sites: the issue is orthogonal to what is described here, and the solution should work regardless of the distribution. For the record, we use Debian.

(Philippe Rigaux, 2013 06 20)


Thank you for publishing your experience with the HTTP-HDFS connector and Hoop. I am working on an idea similar to what you built, though not for streaming purposes.
Good luck.

(amir, 2013 10 10)

