Analyze a video frame stream

classdoc: FrameDetector [c++]

The FrameDetector tracks expressions in a sequence of real-time frames. It expects each frame to have a timestamp that indicates the time the frame was captured, and the timestamps must arrive in increasing order. The FrameDetector detects faces in a frame and delivers information about them to you, including the facial expressions.

1. Create the detector

The FrameDetector constructor expects four parameters: bufferSize, processFrameRate, maxNumFaces, and faceConfig.

FrameDetector(
              /**
                The number of frames to hold in the internal frame buffer for processing
                If the buffer becomes full because processing cannot keep up with the supply of frames,
                the oldest unprocessed frame is dropped.
              */
              int bufferSize,

              /**
                The maximum number of frames processed per second
                If not specified, DEFAULT_PROCESSING_FRAMERATE=30
              */
              float processFrameRate,

              /**
                The maximum number of faces to track
                If not specified, DEFAULT_MAX_NUM_FACES=1
              */
              unsigned int maxNumFaces,

              /**
                Face detector configuration - If not specified, defaults to FaceDetectorMode.LARGE_FACES
                  FaceDetectorMode.LARGE_FACES=Faces occupying large portions of the frame
                  FaceDetectorMode.SMALL_FACES=Faces occupying small portions of the frame
              */
              FaceDetectorMode faceConfig
);

Here's an example:

affdex::FrameDetector detector(2);
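
The remaining parameters are optional. As a sketch, and assuming the default values listed above, the same detector could be created with all four arguments spelled out:

// Buffer up to 2 frames, process at most 30 frames per second,
// track a single face, and expect faces that occupy large
// portions of the frame.
affdex::FrameDetector detector(2, 30, 1, affdex::FaceDetectorMode::LARGE_FACES);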

2. Configure the detector

To initialize the detector, a valid location for the data folder must be specified:

Data folder
The Affdex classifier data files are used in frame analysis processing and are supplied as part of the SDK. To initialize a detector, call the following with the fully qualified path to the folder containing the data files:

std::string classifierPath = "/home/abdo/affdex-sdk/data";
detector.setClassifierPath(classifierPath);

3. Configure the callback functions

The detectors use callback functions defined in interface classes to communicate events and results. The event listeners need to be initialized before the detector is started.

The FaceListener is a client callback interface which sends a notification when the detector has started or stopped tracking a face. Call setFaceListener to set the FaceListener:

classdoc: FaceListener [c++]

class MyApp : public affdex::FaceListener {
public:
  MyApp() {
    detector.setFaceListener(this);
  }

  // FaceListener callbacks, invoked when the detector starts or
  // stops tracking a face
  void onFaceFound(float timestamp, affdex::FaceId faceId) override {}
  void onFaceLost(float timestamp, affdex::FaceId faceId) override {}

private:
  affdex::FrameDetector detector{2}; // the detector created in step 1
};

The ImageListener is a client callback interface which delivers information about an image that has been processed by the detector. Call setImageListener to set the ImageListener:

classdoc: ImageListener [c++]

class MyApp : public affdex::ImageListener {
public:
  MyApp() {
    detector.setImageListener(this);
  }

  // ImageListener callbacks: analysis results for a processed frame,
  // and notification that a frame was received for processing
  void onImageResults(std::map<affdex::FaceId, affdex::Face> faces,
                      affdex::Frame image) override {}
  void onImageCapture(affdex::Frame image) override {}

private:
  affdex::FrameDetector detector{2}; // the detector created in step 1
};

The ProcessStatusListener is a callback interface which provides information regarding the processing state of the detector. Call setProcessStatusListener to set the ProcessStatusListener:

classdoc: ProcessStatusListener [c++]

class MyApp : public affdex::ProcessStatusListener {
public:
  MyApp() {
    detector.setProcessStatusListener(this);
  }

  // ProcessStatusListener callbacks: processing errors and the
  // end-of-processing notification
  void onProcessingException(affdex::AffdexException ex) override {}
  void onProcessingFinished() override {}

private:
  affdex::FrameDetector detector{2}; // the detector created in step 1
};

4. Choose the classifiers

The next step is to turn on detection of the desired metrics. For example, to turn the smile and joy classifiers on or off:

detector.setDetectSmile(true);
detector.setDetectJoy(true);

To turn on or off the detection of all expressions, emotions, emojis, or appearances:

detector.setDetectAllExpressions(true);
detector.setDetectAllEmotions(true);
detector.setDetectAllEmojis(true);
detector.setDetectAllAppearances(true);

To check the status of a classifier at any time, for example smile:

detector.getDetectSmile();
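
Since the getters return the current setting, they can also be used to guard configuration changes; an illustrative sketch:

// Enable smile detection only if it is not already enabled
if (!detector.getDetectSmile()) {
  detector.setDetectSmile(true);
}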

5. Initialize the detector

After a detector is configured using the methods above, the detector initialization can be triggered by calling the start method:

detector.start();

6. Process the frame

After successfully initializing the detector using the start method, frames can be passed to the detector by calling the process method. The process method expects a Frame:

classdoc: Frame [c++]

detector.process(Frame frame);

The FrameDetector uses the timestamp field of the Frame to keep track of time. Therefore, make sure it is set to a positive number that increases with each subsequent frame passed for processing.
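
For example, assuming width, height, and pixelData describe a captured BGR image and timestamp is the capture time in seconds (all placeholder names), a frame can be constructed and submitted like this:

// Wrap the raw pixel data in a Frame stamped with the capture time,
// then hand it to the detector for processing.
affdex::Frame frame(width, height, pixelData,
                    affdex::Frame::COLOR_FORMAT::BGR, timestamp);
detector.process(frame);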

7. Stop the detector

At the end of the interaction with the detector, stop it as follows:

detector.stop();

The processing state can also be reset. Calling reset resets the context of the video frames; additionally, face IDs and timestamps are set to zero (0):

detector.reset();
