A variant of real-time streaming is progressive streaming, also called HTTP streaming because it uses the ubiquitous HTTP protocol and standard HTTP servers to deliver media files. Progressive streaming enables files to be watched as they are downloaded. When the client makes an HTTP request to the server for the media file, the file is stored in the client's memory buffer as it arrives, and playback can begin before the entire file has been downloaded. Most firewalls allow traffic over HTTP, whereas most do not permit RTP. In our approach, we're emulating HTTP streaming.
The JMF API specifies a simple, unified architecture to synchronize and control audio, video, and other time-based data within Java applications and applets.
Let's look at the basic concepts of JMF, including a few useful classes required to build a Web conferencing application:
1. The DataSource class is an abstraction that represents audio, video, or a combination of both. A data source can be a file or a stream and is constructed from the Manager and a MediaLocator as follows:
DataSource ds = javax.media.Manager.createDataSource(mediaLocator);
Here, MediaLocator is a class that JMF uses to represent audio or video media location and is created as follows:
MediaLocator mediaLocator = new MediaLocator("vfw://0");
2. The Player class is used to play media files or stream media. A player is constructed from MediaLocator or the media URL as follows:
Player player = Manager.createPlayer(mediaLocator);
Once the player is realized (ready to play state), you can call player.start() to play the media. A realized player can be created from the DataSource:
Player player = Manager.createRealizedPlayer(ds);
3. A processor is a type of player. Besides playing the media, it can also output media through a DataSource to another player or processor. A processor is used to manipulate the data and convert the data from one format to another. It's created from the DataSource, MediaLocator, or a URL:
Processor processor = Manager.createProcessor(new URL("http://localhost/test.mov"));
4. A manager is one of the most important classes of JMF. It handles the construction of players, processors, and DataSources, as we have seen earlier.
Architecture Description
The architecture has one centralized server and one or many distributed clients. The server has a Web server and a servlet container. Clients run two applets, one for capturing media and the other for playing the media. The high-level steps are:
1. The applet continuously captures video and audio streams from the Webcam. These streams are saved locally in a specified format as a file every few seconds. This file is uploaded to the server over HTTP using a file upload servlet. The upload runs in a separate thread; a significant loss of frames results if the file upload runs in the same thread as the capture. Note that a more efficient method would be to write these streams directly to the server using a socket. This is currently not possible because the DataSource class provided with JMF does not contain a method to get the InputStream.
2. The server gets a new file clip from the sender and stores it in a sender-specific directory. A counter, such as filename+i, is attached to the filename.
3. The JMF Player applet continuously downloads new files from the Web server. It uses JMF's prefetching capability to play these clips in a continuous manner. While the current clip is being played, a new instance of Player is created for the next clip and the next clip is downloaded from the server. This makes the playing of clips continuous, as the next clip to be played has already been prefetched. Note that the entire clip is downloaded by the player applet before playing it.
At the start of playback and while fetching a new clip, the player applet checks for new file availability for up to n seconds before timing out, as sketched below.
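To illustrate this availability check, here is a minimal sketch of a polling helper. The clip URL pattern, retry interval, and use of HTTP HEAD requests are assumptions made for this example and are not part of the listings.

// Polls the Web server until the next clip is available or the timeout expires.
// The 500ms retry interval and HEAD-based check are illustrative assumptions.
import java.net.HttpURLConnection;
import java.net.URL;

public class ClipPoller {
    /** Returns true if the clip becomes available within timeoutSeconds. */
    public static boolean waitForClip(URL clipUrl, int timeoutSeconds) {
        long deadline = System.currentTimeMillis() + timeoutSeconds * 1000L;
        while (System.currentTimeMillis() < deadline) {
            try {
                HttpURLConnection conn = (HttpURLConnection) clipUrl.openConnection();
                conn.setRequestMethod("HEAD");      // ask only whether the file exists yet
                if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
                    return true;                    // next clip is ready to download
                }
            } catch (Exception e) {
                // clip not yet uploaded or server unreachable; fall through and retry
            }
            try { Thread.sleep(500); } catch (InterruptedException e) { return false; }
        }
        return false;                               // timed out waiting for the clip
    }
}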
Computation of Parameters
First, we'll do an approximate mathematical analysis for bandwidth consideration and demonstrate the usability of our approach with a few special cases. Suppose:
One-second file clip size = oneSecFileSize bits
Time duration of each clip = cSec seconds
Upload transmission rate = uRate bits per second
Download transmission rate = dRate bits per second
Time to upload, tUpload = oneSecFileSize * cSec / uRate
Time to download, tDownload = oneSecFileSize * cSec / dRate
If the time to upload or download a clip is more than the time to play a clip, the player will wait and the receiver will see a break, i.e., max(tUpload,tDownload)>cSec. For the continuous playing of clips, the following condition must be true:
Max (1/uRate, 1/dRate) < 1/oneSecFileSize
or, equivalently,
Min (uRate, dRate) > oneSecFileSize
According to this condition, whether the receiver must wait between clips does not depend on the clip duration. For continuous playback, the only requirement is that the one-second file size and the available upload and download rates satisfy the above condition. The lag time between capturing and playing is:
cSec + tUpload + tDownload
From the above equation, and because tUpload and tDownload can each be at most cSec without causing a break, the maximum lag with no break in the feed is 3*cSec, and the minimum lag is cSec.
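As a quick sanity check of these formulas, the following sketch computes tUpload, tDownload, the lag, and the continuity condition for sample parameter values; the numbers are illustrative only and correspond roughly to the low-bandwidth case discussed below.

// Illustrative computation of upload time, download time, and playback lag.
// All parameter values are assumptions chosen for demonstration.
public class LagEstimate {
    public static void main(String[] args) {
        double oneSecFileSize = 8000;   // bits needed for one second of captured media
        double cSec = 10;               // clip duration in seconds
        double uRate = 20000;           // upload rate in bits per second
        double dRate = 20000;           // download rate in bits per second

        double tUpload = oneSecFileSize * cSec / uRate;    // seconds to upload one clip
        double tDownload = oneSecFileSize * cSec / dRate;  // seconds to download one clip

        // Continuous playback requires Min(uRate, dRate) > oneSecFileSize
        boolean continuous = Math.min(uRate, dRate) > oneSecFileSize;

        double lag = cSec + tUpload + tDownload;           // capture-to-playback lag
        System.out.println("tUpload=" + tUpload + "s, tDownload=" + tDownload
                + "s, lag=" + lag + "s, continuous=" + continuous);
    }
}

With these values the clip uploads and downloads in 4 seconds each, so the lag is 18 seconds and playback remains continuous.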
To get a Web conference that is as close to real time as possible, cSec should be reduced. Next, we will apply the above analysis to the following cases.
Let's assume that uRate = dRate = 20Kbits/sec. In this case, the one-second file size should be less than 20Kbits. If the clip duration is 10 seconds, the maximum playback lag will be 30 seconds. We have observed that the minimum file size for transmitting one second of video (with no audio) is 8Kbits using H263 encoding and a 128x96 pixel video size. H263 encoding is ideal for a low-bandwidth environment because it produces smaller file sizes. The H263 encoder in JMF 2.0 can handle only a limited set of video sizes (352x288, 176x144, and 128x96). With the same video plus 8-bit mono audio at an 8000Hz sampling rate, we observed a minimum one-second file size of 80Kbits.
Let's assume that the lower of the two rates is 20Kbits/sec and the other rate is much higher. In this case, the one-second file size should still be less than 20Kbits, but the maximum playback lag is about 20 seconds if the clip duration is 10 seconds.
When both the upload and download rates are high, better quality video can be transmitted. The playback lag will be approximately the clip duration in seconds. JPEG encoding offers good quality video and is well suited to a high-bandwidth environment. File sizes can be decreased during JPEG encoding by lowering the JPEG quality.
There are no easy guidelines for predicting the exact size of the one-second clip; it depends on the video size, the audio sampling rate, the video and audio encodings, the frame rate, and the file format. Users should experiment with different values for these parameters, and with varying amounts of movement in the video, to determine an approximate one-second file size.
JMF Capture Applet
The high-level steps for developing the capture applet are (see Listing 1 and the sketch after this list):
- A DataSource is created from the Webcam source using the MediaLocator.
- A ProcessorModel is created from the DataSource, a Format object specifying the video format, and a FileTypeDescriptor object specifying the output file format.
- A Processor is created from the ProcessorModel and the output DataSource is obtained from the Processor.
- A MediaLocator for the output file is created first, and a DataSink object is created from it and the Processor's output DataSource.
- Capture of the stream is started and the stream is saved for a specified duration into a file.
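The following is a minimal sketch of these steps; the capture locator, chosen formats, output file name, and clip duration are assumptions, and Listing 1 contains the complete capture applet.

// Capture sketch: Webcam DataSource -> Processor -> DataSink writing a file.
// The vfw:// locator, H263 format, QuickTime output, clip file name, and
// 5-second duration are assumptions for illustration.
import javax.media.*;
import javax.media.format.*;
import javax.media.protocol.*;

public class CaptureSketch {
    public static void main(String[] args) throws Exception {
        // DataSource from the Webcam device
        MediaLocator camera = new MediaLocator("vfw://0");
        DataSource source = Manager.createDataSource(camera);

        // ProcessorModel from the DataSource, track formats, and output file type
        Format[] formats = { new VideoFormat(VideoFormat.H263) };
        ContentDescriptor outType = new FileTypeDescriptor(FileTypeDescriptor.QUICKTIME);
        ProcessorModel model = new ProcessorModel(source, formats, outType);

        // Processor and its output DataSource
        Processor processor = Manager.createRealizedProcessor(model);
        DataSource output = processor.getDataOutput();

        // DataSink created from a MediaLocator pointing at the output file
        MediaLocator dest = new MediaLocator("file:clip0.mov");
        DataSink sink = Manager.createDataSink(output, dest);
        sink.open();

        // Capture the stream for a fixed duration, then stop and close
        sink.start();
        processor.start();
        Thread.sleep(5000);      // capture roughly five seconds
        processor.stop();
        sink.stop();
        sink.close();
    }
}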
File Upload
The file upload has two parts: the file upload thread at the client and the upload servlet at the server. The following are the steps for developing the file upload thread (see Listing 2 and the sketch after this list):
- Create a socket connection with the server.
- Create an HTTP POST request and an HTTP head and tail.
- Create necessary IO stream objects.
- Send an HTTP request to the server. Write the HTTP head, the clip file, and the HTTP tail to the server.
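A rough sketch of such an upload thread appears below; the host, port, servlet path, multipart boundary, and form field name are assumptions, and Listing 2 shows the actual implementation.

// Client-side upload thread: opens a socket, writes an HTTP POST request with
// a multipart head, the clip file bytes, and a multipart tail.
// Host, port, servlet path, boundary, and field name are illustrative assumptions.
import java.io.*;
import java.net.Socket;

public class UploadThread extends Thread {
    private final File clip;

    public UploadThread(File clip) { this.clip = clip; }

    public void run() {
        try (Socket socket = new Socket("localhost", 80)) {
            String boundary = "----clipBoundary";
            String head = "--" + boundary + "\r\n"
                    + "Content-Disposition: form-data; name=\"file\"; filename=\""
                    + clip.getName() + "\"\r\n"
                    + "Content-Type: application/octet-stream\r\n\r\n";
            String tail = "\r\n--" + boundary + "--\r\n";
            byte[] body = readFile(clip);

            DataOutputStream out = new DataOutputStream(socket.getOutputStream());
            // HTTP POST request line and headers
            out.writeBytes("POST /upload HTTP/1.1\r\n");
            out.writeBytes("Host: localhost\r\n");
            out.writeBytes("Content-Type: multipart/form-data; boundary=" + boundary + "\r\n");
            out.writeBytes("Content-Length: " + (head.length() + body.length + tail.length()) + "\r\n\r\n");
            // HTTP head, the clip file, and HTTP tail
            out.writeBytes(head);
            out.write(body);
            out.writeBytes(tail);
            out.flush();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    private static byte[] readFile(File f) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (FileInputStream in = new FileInputStream(f)) {
            byte[] chunk = new byte[4096];
            int n;
            while ((n = in.read(chunk)) > 0) buf.write(chunk, 0, n);
        }
        return buf.toByteArray();
    }
}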
JMF Player Applet
The high-level steps for developing the player applet are (see Listing 3 and the sketch after this list):
- Construct two players from the URL of the media at the Web server. One player is for the current clip and the other is for the next clip.
- Start the first player and fetch the next clip using the second player.
- On the EndOfMediaEvent for clip i, start playing clip i+1. Destroy the visual component for the player of clip i, deallocate that player, and create a new player for clip i+2. Prefetch clip i+2 and add a ControllerListener. Repeat these steps for subsequent clips.
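The prefetch logic can be condensed into the sketch below; the clip URL pattern and class wiring are assumptions, and Listing 3 contains the working applet (including the visual component handling, which is omitted here).

// Double-player prefetch loop: while clip i plays, the player for clip i+1 is
// already prefetched; on EndOfMediaEvent the players are rotated.
// The clip URL pattern is an assumption for illustration.
import java.net.URL;
import javax.media.*;

public class PlayerSketch implements ControllerListener {
    private Player current;    // player for clip i
    private Player next;       // prefetched player for clip i+1
    private int clipIndex = 0;

    public void begin() throws Exception {
        current = createClipPlayer(clipIndex);
        next = createClipPlayer(clipIndex + 1);
        current.addControllerListener(this);
        current.start();       // play clip i
        next.prefetch();       // prefetch clip i+1 while clip i plays
    }

    public synchronized void controllerUpdate(ControllerEvent event) {
        if (event instanceof EndOfMediaEvent) {
            // Clip i finished: release its player and switch to the prefetched one
            current.removeControllerListener(this);
            current.deallocate();
            current.close();

            current = next;
            clipIndex++;
            current.addControllerListener(this);
            current.start();   // start playing clip i+1

            try {
                // Create and prefetch the player for clip i+2
                next = createClipPlayer(clipIndex + 1);
                next.prefetch();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    private Player createClipPlayer(int i) throws Exception {
        // Creating a realized player downloads the entire clip before playback
        URL clipUrl = new URL("http://localhost/clips/sender1/clip" + i + ".mov");
        return Manager.createRealizedPlayer(new MediaLocator(clipUrl));
    }
}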
HTML Code for Sender and Receiver Applets
The HTML code for the sender applet is shown in Listing 4.
The HTML code for the receiver applet is shown in Listing 5. Note that this HTML page is generated dynamically with the appropriate senderID and current counter. If the receiver wants to receive multiple feeds, multiple applet entries are generated in HTML.
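As a rough illustration of how such a page can be generated on the server, the sketch below emits one applet entry per feed; the servlet name, applet class name, and parameter names are assumptions, and Listing 5 shows the actual HTML.

// Sketch of a servlet that generates the receiver page dynamically.
// Applet class name and parameter names are illustrative assumptions.
import java.io.PrintWriter;
import javax.servlet.http.*;

public class ReceiverPageServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws java.io.IOException {
        String senderId = req.getParameter("senderId");  // feed to subscribe to
        String counter = req.getParameter("counter");     // current clip counter
        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        out.println("<html><body>");
        // One applet entry per feed; this block is repeated for multiple senders
        out.println("<applet code=\"PlayerApplet.class\" width=\"320\" height=\"240\">");
        out.println("  <param name=\"senderId\" value=\"" + senderId + "\"/>");
        out.println("  <param name=\"counter\" value=\"" + counter + "\"/>");
        out.println("</applet>");
        out.println("</body></html>");
    }
}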
Drawbacks
- There is a lag between capturing and playing.
- It involves expensive disk write operations.
- Both receivers and senders must have JMF software installed.
Future Enhancements
- A sophisticated in-memory buffering mechanism to allow better video quality and efficient delivery by eliminating expensive disk writes.
- An extension of the DataSource class to allow InputStream-based processing, so the media can be saved directly at the server without a local buffer.
- Packaging and delivery of the required DLLs and registry files of JMF so that there's no need to install the JMF software.
Thus far, we have described an HTTP-based approach that involves no real-time streaming. An alternative to the above approach is a peer-to-peer, RTP-based Web conferencing solution that can be developed using the JMF API. The source code for the RTP Server/Sender can be found at java.sun.com/products/java-media/jmf/2.1.1/solutions/AVTransmit.html, and the RTP Player applet at java.sun.com/products/java-media/jmf/2.1.1/samples/sample-code.html#RTPPlayerApplet. The RTP Server captures the media from the Webcam and streams it to receivers by specifying IP addresses and port numbers. The RTP Player listens on a specific port for streams coming from the sender's IP address.
The primary difference between the HTTP approach and the RTP approach is that RTP streams the feeds continuously to receivers without storing them in files locally or at the server. The disadvantages of the above RTP approach are:
- Public IP addresses are required for both the sender and the receiver.
- The senders and receivers should not be behind firewalls because RTP is not allowed by most corporate firewalls.
- As the number of participants increases, the number of ports also increases linearly, which makes user and port management challenging.
- The default RTP implementation of JMF uses the unreliable UDP protocol, so delivery time and quality are not guaranteed; frames may be dropped or arrive out of sequence during transmission.
The architecture of the Web conferencing system using commercial streaming servers is shown in Figure 2. Senders first register a unique broadcast/mount point with a user management component, as shown by arrow 1. The sender then uses a streaming protocol (for example, RTP or RTSP) to push the media stream to a centralized streaming server, as shown by arrow 3. Receivers first look up senders at the user management component (as shown by arrow 2) and obtain the corresponding broadcast addresses. Receivers then request and receive media from the streaming server, as shown by arrow 4.
In this approach we don't need to break the feed into smaller clips. Senders use the encoder to stream the media to the server. The server then streams the media to receivers. The disadvantages of this approach are:
- The architecture is heavy, as it involves the use of costly and complex streaming servers, players, and encoders.
- It's not an open architecture; it becomes specific to one particular system, such as RealSystem, which makes it incompatible with solutions from other vendors.
- Capture programs, a.k.a. encoders, are not readily available and are not browser based.
- It uses special streaming protocols such as RTP or RTSP, which are not allowed through most firewalls.