PXLab Manual



Sound and Video

Currently PXLab's capability to present sound stimuli is somewhat limited. The following sound objects can play sound files, create and play synthetic sounds, and record sound. Included is a voice key object which may also be used as a timer.

SoundFile
plays a wave file.
SyntheticSound
plays various synthetic sounds with defined envelopes and wave types.
SoundRecorder
records sound from an arbitrary sound input channel.
SoundRecorderControl
controls recording as a background task which runs across multiple displays.
VoiceKey
records sound from a microphone and measures the time to threshold.

Sound display objects do not affect the screen content. This means that while a sound display object is running the screen remains in the state it had before the sound object started. Here is an example of using synthetic sound objects in a paired comparison task trial:

    Trial(TrialCounter, SyntheticSound:A.WavePars, SyntheticSound:B.WavePars, Message:C.ResponseCode) {
      Message:A() {
        Timer = de.pxlab.pxl.TimerCodes.NO_TIMER;
        Text = "<==";
      }
      SyntheticSound:A() {
        Timer = de.pxlab.pxl.TimerCodes.END_OF_MEDIA_TIMER;
        Duration = 600;
        Gain = 0.8;
        Envelope = de.pxlab.pxl.SoundEnvelopeCodes.SINUSOIDAL;
        Wave = de.pxlab.pxl.SoundWaveCodes.PURE_TONE;
      }
      Message:B() {
        Timer = de.pxlab.pxl.TimerCodes.NO_TIMER;
        Text = "==>";
      }
      SyntheticSound:B() {
        Timer = de.pxlab.pxl.TimerCodes.END_OF_MEDIA_TIMER;
        Duration = 600;
        Gain = 0.8;
        Envelope = de.pxlab.pxl.SoundEnvelopeCodes.SINUSOIDAL;
        Wave = de.pxlab.pxl.SoundWaveCodes.PURE_TONE;
      }
      Message:C() {
        Timer = de.pxlab.pxl.TimerCodes.RESPONSE_TIMER;
        Text = "?";
        ResponseSet = [0, de.pxlab.pxl.KeyCodes.LEFT_KEY, de.pxlab.pxl.KeyCodes.RIGHT_KEY];
      }
      ClearScreen() {
        Timer = de.pxlab.pxl.TimerCodes.CLOCK_TIMER;
        Duration = 300;
      }
    }

There are two sound objects presented in sequence and each has its own screen content which is defined by a preceding Message() object. For the first sound the screen shows a left arrow and for the second sound the screen shows a right arrow. The subject's response is only collected after the second sound object is finished and while the Message:C() object shows a question mark on the screen. No responses are accepted as long as the sounds are playing.

Synthetic Sounds


A synthetic sound object SyntheticSound is defined by an envelope function modulating a base wave type. Envelope functions may be defined independently for the two available channels.

Envelope types are defined by the parameter SyntheticSound.Envelope and the envelope parameters are defined by SyntheticSound.EnvelopePars. If the sound signal has two channels then the channels may have independent envelope functions. In this case SyntheticSound.Envelope may be an array with two envelope type values. If two independent envelopes are to be defined then the parameter SyntheticSound.EnvelopePars contains the parameters of both channels. The parameters for the left channel come first and the parameters for the right channel follow.
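
As an illustration, two independent channel envelopes might be defined like the following sketch. This is hypothetical: the parameter values are made up for illustration only.

    SyntheticSound() {
      Timer = de.pxlab.pxl.TimerCodes.END_OF_MEDIA_TIMER;
      Duration = 1000;
      Gain = 0.8;
      Envelope = [de.pxlab.pxl.SoundEnvelopeCodes.GAUSSIAN,
                  de.pxlab.pxl.SoundEnvelopeCodes.LINEAR_ASR];
      EnvelopePars = [200.0, 100.0, 100.0];
      Wave = de.pxlab.pxl.SoundWaveCodes.PURE_TONE;
      WavePars = 1000.0;
    }

Here the left channel gets a GAUSSIAN envelope whose single parameter (the standard deviation, 200.0) comes first in EnvelopePars, followed by the two LINEAR_ASR parameters (attack and release duration) for the right channel.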

The following envelope functions are supported:

CONSTANT
A constant function.
GAUSSIAN
A Gaussian envelope whose maximum is at half of the sound duration. The parameter SyntheticSound.EnvelopePars contains only a single value, the standard deviation of the Gaussian.
GAUSSIAN2
A Gaussian envelope whose maximum position depends on a parameter. The parameter SyntheticSound.EnvelopePars contains the maximum position and the standard deviation.
SINUSOIDAL
A sinusoidal envelope which has no parameters.
LINEAR_ASR
An envelope which has a linear attack, sustain, and release period. The parameters are the attack and the release duration. The sustain duration is computed from the sound's total duration.
GAUSSIAN_ASR
An envelope which has a Gaussian attack and release period. The parameters are the attack duration, the standard deviation of the attack Gaussian, the release duration, and the standard deviation of the release Gaussian.
SINUSOIDAL_ASR
An envelope which has a sinusoidal attack and release period. The parameters are the attack and the release duration. The sustain duration is computed from the sound's total duration.

The following wave types are supported:

PURE_TONE
A sine wave with the frequency as its only parameter.
FOURIER_SERIES
A Fourier series. The Fourier series is defined by

S(t) = Sum(k = 1, ..., M) of [ak * cos(k * 2 * Pi * f0 * t) + bk * sin(k * 2 * Pi * f0 * t)].

The respective SyntheticSound.WavePars array contains the following values:

WavePars = [M, f0, a1, b1, a2, b2, ..., aM, bM]

The parameters must be such that the series strictly remains in the range -2 <= S(t) <= +2. Here is an example for a square wave with Gaussian envelope (a square wave has the coefficients bk = 1/k for odd k and zero otherwise, with all ak = 0):

        Trial("Gaussian Envelope\nSynthetic Square Wave", 1, 0, 
		de.pxlab.pxl.SoundEnvelopeCodes.GAUSSIAN, [200.0],
                de.pxlab.pxl.SoundWaveCodes.FOURIER_SERIES, 
		[9.0, 400.0, 
		0.0, 1.0, 
		0.0, 0.0, 
		0.0, 0.3333, 
		0.0, 0.0, 
		0.0, 0.2, 
		0.0, 0.0, 
		0.0, 0.14286, 
		0.0, 0.0, 
		0.0, 0.11111], ?);

WHITE_NOISE_SOUND
Pure white noise. The parameter SyntheticSound.WavePars is not used in this case.

In every case a channel's signal is the product of the channel's envelope function with the wave function.

Synthetic sound objects always have to use a timer of type END_OF_MEDIA_TIMER.

Playing Sound Files

The sound object SoundFile plays uncompressed wave ('WAV') files. The duration of this display object is the duration of the wave file. Here is an example which plays two WAV files in every single Trial:

The complete design file shows how simple it is to include sound file playing:

      SoundFile:A() {
        Timer = de.pxlab.pxl.TimerCodes.END_OF_MEDIA_TIMER;
        Directory = ".";
      }

At least a proper Timer and the Directory where the sound files reside have to be given.
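
A complete minimal trial which plays two sound files in sequence might thus be sketched as follows. This is a hypothetical sketch: the file names used to instantiate the trial are placeholders.

      Trial(SoundFile:A.FileName, SoundFile:B.FileName) {
        SoundFile:A() {
          Timer = de.pxlab.pxl.TimerCodes.END_OF_MEDIA_TIMER;
          Directory = ".";
        }
        SoundFile:B() {
          Timer = de.pxlab.pxl.TimerCodes.END_OF_MEDIA_TIMER;
          Directory = ".";
        }
      }

In the Procedure section such a trial would be instantiated as Trial("first.wav", "second.wav");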

Responses to Sound Objects

Objects which play sound have a fixed duration and use a media timer because the sound object's native duration determines the presentation duration. This may pose a problem when responses have to be collected while a sound object is still playing. Consider the following scenario: the stimulus is a synthetic sound which plays for 3000 ms. The subject may respond within these 3000 ms but may also respond only after the sound has finished. In this case we need a display object following the sound object which will wait until there is a response. Here is a somewhat complex solution to this problem:

Experiment() {

  Context() {

    AssignmentGroup() {
      ExperimentName = "Collect Responses While Playing Sound";
      SubjectCode = "pxlab";
      DataFileTrialFormat = "%SubjectCode% %FinalResponseTime%";
      TrialFactor = 3;
      ResponseTime = 0;
      new FinalResponseTime = (ResponseTime > 0)? ResponseTime: Trial.SyntheticSound.ResponseTime;
    }

    Session() {
      Instruction() {
        Text = ["Respond while a Sound is Playing",
		" ",
		"Press a response key while the sound is playing or wait until the sound is finished and then press a response key.",
                "The response time will be shown correctly in both cases.",
		" ",
		"Press any key now to start!"];
      }
    }

    Trial(SyntheticSound.ResponseTime, ResponseTime) {
      Message() {
        Timer = de.pxlab.pxl.TimerCodes.NO_TIMER;
	Text = "?";
      }
      SyntheticSound() {
        Timer = de.pxlab.pxl.TimerCodes.WATCHING_END_OF_MEDIA_TIMER | de.pxlab.pxl.TimerCodes.START_RESPONSE_TIMER;
        Duration = 3000;
        Gain = 0.8;
	Envelope = de.pxlab.pxl.SoundEnvelopeCodes.GAUSSIAN;
	EnvelopePars = 200.0;
	Wave = de.pxlab.pxl.SoundWaveCodes.PURE_TONE;
	WavePars = 1000.0;
      }
      Nothing() {
        Execute = Trial.SyntheticSound.ResponseCode==de.pxlab.pxl.ResponseCodes.CLOSE_MEDIA;
        Timer = de.pxlab.pxl.TimerCodes.LIMITED_RESPONSE_TIMER | de.pxlab.pxl.TimerCodes.STOP_RESPONSE_TIMER;
        Duration = 3000;
      }
      Feedback() {
        Evaluation = de.pxlab.pxl.EvaluationCodes.COMPARE_CODE;
        CorrectCode = de.pxlab.pxl.ResponseCodes.CLOSE_MEDIA;
        ResponseParameter = "Trial.SyntheticSound.ResponseCode";
	FalseText = "ResponseTime = %Trial.SyntheticSound.ResponseTime@i%\nCode = %Trial.SyntheticSound.ResponseCode%";
	CorrectText = "ResponseTime = %ResponseTime@i%\nCode = %Trial.Nothing.ResponseCode%";
        Timer = de.pxlab.pxl.TimerCodes.CLOCK_TIMER;
        Duration = 2000;
      }
    }
  }

  Procedure() {
    Session() {
      Block() {
        Trial(?, ?);
      }
    }
  }
}

The trial starts with a Message object showing a question mark and is immediately followed by the sound object. The sound object's timer is defined to be

   Timer = de.pxlab.pxl.TimerCodes.WATCHING_END_OF_MEDIA_TIMER | de.pxlab.pxl.TimerCodes.START_RESPONSE_TIMER;

The WATCHING_END_OF_MEDIA_TIMER activates response event watching during the sound object's display interval, and the START_RESPONSE_TIMER flag starts response time measurement across multiple display objects. If the response event happens while the sound object is being shown then the WATCHING_END_OF_MEDIA_TIMER mechanism stores the result in the sound object's response parameters.

The sound object is followed by a Nothing object which does not change the screen content. However, this object's Execute parameter is defined to be

   Execute = Trial.SyntheticSound.ResponseCode==de.pxlab.pxl.ResponseCodes.CLOSE_MEDIA;

such that it is executed only if the sound object has CLOSE_MEDIA as its ResponseCode value, which means that the sound played to its end without a subject response. If the Nothing object is executed then it waits for a response and also stops the response timer:

   Timer = de.pxlab.pxl.TimerCodes.LIMITED_RESPONSE_TIMER | de.pxlab.pxl.TimerCodes.STOP_RESPONSE_TIMER;

The Feedback object finally shows the proper response code, and the newly defined parameter FinalResponseTime contains the resulting response time:

  new FinalResponseTime = (ResponseTime > 0)? ResponseTime: Trial.SyntheticSound.ResponseTime;

The global parameter ResponseTime is nonzero only if the Nothing object has been executed and has stopped the response timer. If not, then the sound object's parameter ResponseTime contains the actual response time.


Recording Sound

Both SoundRecorder and VoiceKey record sound from a source which should be defined at the operating system level as the system's sound input channel, for example by selecting the microphone as the input source in the sound device mixer.

The voice key object VoiceKey records sound until an input value is found which is larger than the value of the parameter VoiceKey.Threshold. The voice key then stops recording and finishes the display interval. An alternative way to use a voice key response is to set an arbitrary object's Timer parameter to VOICE_KEY_TIMER. A more detailed description of the voice key is contained in the chapter 'Stimulus and Response Timing'.

Recorded sound is stored in a file whose name by default is derived from the subject code, the session counter, and the trial counter:

   FileName = "%SubjectCode%_%SessionCounter%_%TrialCounter%.wav";

The VoiceKey object can also store the recorded sound in a sound file. This is done if the parameter VoiceKey.Directory is defined; by default it is undefined for the VoiceKey object.
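
A minimal VoiceKey definition which measures the time to threshold and also stores the recording might thus look like this sketch. The Threshold value is illustrative only; its scale is not specified in this chapter.

      VoiceKey() {
        Threshold = 0.2;
        Directory = ".";
        FileName = "%SubjectCode%_%SessionCounter%_%TrialCounter%.wav";
      }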

The SoundRecorderControl object is slightly different from SoundRecorder in that it may be used to control a single sound recorder which runs as a background task while other display objects may be shown. Background sound recording may be started by setting the parameter SoundRecorderControl.Command to START and may be stopped by another instance of SoundRecorderControl which sets the parameter SoundRecorderControl.Command to STOP. All instances of SoundRecorderControl refer to the same sound recorder, and the recording parameters and the file name must be set by the instance which starts recording.
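
The start/stop mechanism might be sketched like this within a trial. This is a hypothetical sketch: the fully qualified class names of the START and STOP command codes are not shown in this chapter, and the recording parameters given to the starting instance are assumptions.

      SoundRecorderControl:Start() {
        Command = START;
        Directory = ".";
        FileName = "%SubjectCode%_%SessionCounter%_%TrialCounter%.wav";
      }
      /* ... other display objects run here while recording
         continues as a background task ... */
      SoundRecorderControl:Stop() {
        Command = STOP;
      }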

It is not possible to demonstrate the voice key and sound recording here since the Java security manager does not allow applets to record sound.

Showing Movies and Other Time Based Media

Showing movies requires the Java Media Framework (JMF) package to be installed on the client. Be sure to follow the installation instructions of the JMF package carefully in order to make JMF work properly. The order of installation seems to be important: first install the browser, then the Java Runtime Environment, and then the Java Media Framework.

Here is an example which plays a short MPEG-1 movie containing video and sound. The demo only works if the JMF package has been installed properly.

Playing a movie is rather simple. Here is the previous demo's design file:

Experiment() {
  Context() {
    AssignmentGroup() {
      SubjectCode = "pxlab";
    }
    Session() {
      Message() {
        Text = "PXLab plays a movie";
        Timer = de.pxlab.pxl.TimerCodes.CLOCK_TIMER;
        Duration = 500;
      }
    }
    Trial(Message.Text, Movie.FileName) {
      Message() {
        Width = 1024;
        FontSize = 32;
        LocationY = 300;
        Timer = de.pxlab.pxl.TimerCodes.NO_TIMER;
      }
      Movie() {
	Directory = AppletSystem? ".": "c:/jpxl/www/doc/manual";
      }
    }
  }
  Procedure() {
    Session() {
      Block() {
        Trial("This is a PXLab movie", "pxlab-mpeg1.mpg");
      }
    }
  }
}


Playing the movie is done by the Movie display object. It accepts a directory and file name which contains the movie. The actual movie player is based on Sun's Java Media Framework software. The player used here is capable of playing many time based media files. Thus the Movie object can play not only movie files but also sound files using various types of compression methods depending on the codecs and JMF plugins being installed. This is different from the SoundFile object which can only play uncompressed wave files. Look into the JMF documentation for supported formats.

Here is a demo where the Movie object is used to play a sound file:

Experiment() {
  Context() {
    AssignmentGroup() {
      SubjectCode = "pxlab";
    }
    Trial(Movie.FileName) {
      Message() {
        Timer = de.pxlab.pxl.TimerCodes.NO_TIMER;
	Text = "File:  '%Trial.Movie.FileName%'";
      }
      Movie() {
	Directory = AppletSystem? ".": "c:/jpxl/doc/tutorial";
      }
    }
  }
  Procedure() {
    Session() {
      Block() {
        Trial("dieFrage.wav");
      }
    }
  }
}


The Movie display object accepts any URL as its media file name. Note, however, that for security reasons applets may only play media data which are located in the applet's location tree. This restriction does not hold for applications.
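
An application may thus also address the media file by a full URL, for example like this sketch (the URL is a placeholder):

      Movie() {
        FileName = "http://www.example.com/media/demo.mpg";
      }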

Here is another MPEG-1 movie which has been created from a sequence of screen captures using the ScreenCapture display object. It simply shows a running sequence of numbers which number the individual frames of the movie. The frame rate is 25 Hz and there are exactly 25 frames.

In some cases it might be necessary to have better control of movie timing than is available with the Movie object. This is possible with three more media player objects:

  1. MediaPlayerOpen prepares the media player for playing a given media file. Preparation includes searching and opening the media file, caching media data and preparing the display screen for showing the media window. MediaPlayerOpen has the same experimental parameters as Movie since all visual parameters are set here.
  2. MediaPlayerStart actually starts the media player. Starting the player is as fast as possible given the way the media data have been prepared by MediaPlayerOpen. If the parameter FastStart of MediaPlayerOpen has been set then opening the player already shows a static view of the first movie frame and MediaPlayerStart can start the movie with a delay as low as 2 ms (at least on a 3 GHz PC). If FastStart is not set then the delay may go up to 150 ms on a 3 GHz PC.
  3. MediaPlayerSync synchronizes display processing to the media stream. Two types of synchronization are possible: (1) wait until a certain media time is reached and (2) wait until the end of the media data file has been reached. In order to wait for the end of the media data stream the Timer parameter of the MediaPlayerSync object must be set to MEDIA_STOP_TIMER. If the synchronization object should wait until a certain media time has been reached then its Timer must be set to MEDIA_SYNC_TIMER and the parameter MediaPlayerSync.MediaTime has to be set to the respective media time. The media time is given in milliseconds relative to the start of the media stream. Thus in order to synchronize to the start of the 16th frame of a 25 Hz video stream MediaPlayerSync.MediaTime must be set to 15 * 40 = 600 ms.
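
The two synchronization modes of MediaPlayerSync might thus be sketched as follows, with timer code names taken from this chapter's text and examples:

      MediaPlayerSync:Frame16() {
        Timer = de.pxlab.pxl.TimerCodes.MEDIA_SYNC_TIMER;
        MediaTime = 15 * 40;
      }
      MediaPlayerSync:End() {
        Timer = de.pxlab.pxl.TimerCodes.END_OF_MEDIA_TIMER;
      }

The first object waits until media time 600 ms, the start of the 16th frame of a 25 Hz stream; the second waits until the media stream has ended.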

Here is an example of a 2 second MPEG1 movie which shows a drifting grating and also counts the video frames.

And here is the design file of this demo:

Experiment() {
  Context() {
    AssignmentGroup() {
      SubjectCode = "pxlab";
    }
    Session() {
      ClearScreen() {
        Timer = de.pxlab.pxl.TimerCodes.NO_TIMER;
      }
    }
    Trial(MediaPlayerOpen.FileName, MediaPlayerStart.StartDelay) {
      MediaPlayerOpen() {
	Directory = AppletSystem? ".": "c:/jpxl/www/doc/manual";
        FastStart = 1;
      }
      Message:Start() {
        Text = "Press the SPACE bar\n to start an MPEG1 grating movie";
        Width = 1024;
        LocationY = -240;
        FontSize = 32;
        Timer = de.pxlab.pxl.TimerCodes.RESPONSE_TIMER;
      }
      MediaPlayerStart() {
        Timer = de.pxlab.pxl.TimerCodes.NO_TIMER;
      }
      MediaPlayerSync:Stop() {
        Timer = de.pxlab.pxl.TimerCodes.END_OF_MEDIA_TIMER;
      }
      Feedback() {
        Text = "Start delay = %Trial.MediaPlayerStart.StartDelay% ms";
        FontSize = 32;
        Timer = de.pxlab.pxl.TimerCodes.CLOCK_TIMER;
        Duration = 2000;
      }
    }
  }

  Procedure() {
    Session() {
      Block() {
        Trial("grating_352x288x25_mpeg1.mpg", ?);
      }
    }
  }
}


The final example uses media synchronization in order to switch the screen's background color in synchronization with the movie content. The movie again is the 25 frame numbers movie with a small change: starting from frame 16 the background is white and the numbers are black. If synchronization works then the screen background should switch its color in synchronization with the movie background.

Here is the design file of this demo:

Experiment() {
  Context() {
    AssignmentGroup() {
      SubjectCode = "pxlab";
    }
    Session() {
      ClearScreen() {
        Timer = de.pxlab.pxl.TimerCodes.NO_TIMER;
      }
    }
    Trial(MediaPlayerOpen.FileName, MediaPlayerStart.StartDelay) {
      MediaPlayerOpen() {
	Directory = AppletSystem? ".": "c:/jpxl/www/doc/manual";
        FastStart = 0;
      }
      Message:Start() {
        Text = "Press the SPACE bar\n to start the MPEG1 'Numbers' Movie";
        Width = 1024;
        LocationY = -240;
        FontSize = 32;
        Timer = de.pxlab.pxl.TimerCodes.RESPONSE_TIMER;
      }
      MediaPlayerStart() {
        Timer = de.pxlab.pxl.TimerCodes.NO_TIMER;
      }
      MediaPlayerSync() {
        MediaTime = 15 * 40;
      }
      FillScreen() {
        Timer = de.pxlab.pxl.TimerCodes.NO_TIMER;
	Color = White;
      }
      MediaPlayerSync:Stop() {
        Timer = de.pxlab.pxl.TimerCodes.END_OF_MEDIA_TIMER;
      }
      Feedback() {
        Text = "Start delay = %Trial.MediaPlayerStart.StartDelay% ms";
        FontSize = 32;
        Timer = de.pxlab.pxl.TimerCodes.CLOCK_TIMER;
        Duration = 2000;
      }
    }
  }

  Procedure() {
    Session() {
      Block() {
        Trial("numbers_sync_352x288x25_mpeg1.mpg", ?);
      }
    }
  }
}


Note that movie timing in applets seems to be strongly affected by browser properties and the system environment. Thus, in order to really test PXLab's timing properties, these demos should be run as applications and not as applets. Experience shows that when running as an application the media player provides satisfactory timing precision for most purposes. Also note that several video formats which are supported for applications are not supported by the JMF when running as an applet.

[This file was last updated on July 15, 2010, 12:07:01.]
