firecam is a Control, Video Display, Video Filtering and Object Recognition/Tracking application for IEEE1394/IIDC-1.31 compliant Cameras. firecam can select Standard and Format-7 Video Modes and Color Codings, Frame Rates and Isochronous Speeds. It can also set the Values of Features, perform Color conversion (e.g. Bayer to RGB) and start or stop video transmission.
firecam has a number of Video Filters that can recognize Objects by a number of characteristics like Motion, Contrast, Color Gradient etc. Regions of pixels separated by a filter can be bound into a single Target by Connectivity filters. These Targets can then be tracked by rotating the Camera with USB-controlled Az-El Servos. firecam has a fairly effective Target Management system for keeping track of the different Objects identified by a Video Filter and for selecting another Target when the one being tracked is no longer detectable.
Support of IEEE1394/IIDC-1.31 compliant Cameras
firecam supports IEEE1394/IIDC-1.31 compliant Cameras directly, through the "raw1394" API. This means that raw1394 support must be enabled in the Linux kernel configuration, either as a built-in feature in the kernel or as a loadable module. Camera control is based on the IIDC-1.31 Firewire Digital Camera protocol. Most Firewire cameras comply with this standard, as do some USB cameras that support IIDC-over-USB. firecam currently does not support this new protocol.
Control of Camera
Through the raw1394 API, firecam can set the Standard Mode and Color Coding of the Camera, as well as the Frame Rate. In Format-7, if available, the Region of Interest Size and Position, the Color Coding and the Frame Rate can also be set. The Isochronous transmission speed can be selected from a drop-down combo box menu.
firecam has functions to display Bayer-tile coded video in RGB color, Greyscale (Mono) or Raw (e.g. display the Bayer tile itself) in a frame in the firecam window. Camera features (Brightness, Contrast, Exposure, White Balance etc) can be controlled manually or they can be set to Automatic control. The video stream can be started and stopped by a toggle button and the Camera can be reset, if this feature is available.
Display of Camera and Filter video output
firecam has a number of Video Filters for the detection of "Targets" (Objects) within the field of view of the Camera. These filters include a Motion filter, a Color Gradient filter, a Contrast filter, an Edge Detector etc. These filters identify pixels in the incoming video buffer that match the criteria of the filter, and mark them in a separate video buffer as white pixels in a black background. The filtered video is displayed in a separate frame in the firecam window so that detected objects can be seen.
firecam has some Connectivity filters that can "bind" filtered pixels into individual Targets (objects) for tracking. Currently the available Connectivity filters are 8-Connectivity, Borderline and Pixel Runs. When a Connectivity filter is used, pixels bound into an object are colored with different colors for each object, for clarity.
Tracking of Targets by USB
firecam can track a single "Target", selected from the group of objects detected by the video filtering functions. This is done by rotating the Camera in both Azimuth and Elevation through a pan-tilt mount, controlled by a USB PhidgetServo card. This card can accept Azimuth and Elevation commands from firecam via built-in, libusb-1.0 based, interface functions, and it can then control the pan-tilt servos so that the selected Target is maintained in the center of the camera's field of vision.
firecam has Target Management functions which allow the user to define a number of parameters that govern which objects, detected by the Video Filter and Connectivity function, are classified as Targets and also how they are tracked. For example, there are limits that can be set on the size of a Target, its aspect ratio, the minimum size of the background around it etc.
Please note that I use Arch Linux AMD64 which is a "bleeding edge" type distribution, so there may be compilation and/or run time difficulties if you are using a relatively old distro. This is mostly true of the basic dependencies like GTK+ 2 and Glade-2, and the availability of the "raw1394" driver. This is normally a module and it may not be loaded by default. firecam uses the libusb-1.0 API to control the pan-tilt servos.
To compile the package, it may be preferable to first run the included "autogen.sh" script in the package's top directory, to produce a fresh build environment. Then the "configure" script can be run with optional parameters to override the default settings and compiler flags, e.g: ./configure --prefix=/usr CFLAGS="-g -O2" will override the default /usr/local installation prefix and the "-Wall -O2" compiler flags.
Running "make" in the package's top directory should produce the executable binary in src/. Running "make install" will install the binary into /usr/local/bin by default or under the specified prefix. It will also install the default configuration file into the user's home directory. This will have to be edited by the user as required. There is also this hypertext documentation file which you can copy to a location of your choice.
firecam prints out a lot of information on the Firewire camera it detects on the IEEE1394 bus so, if started in an X terminal, a lot of detail on the camera capabilities, video formats etc can be obtained. firecam also prints details of the video mode (size, color coding etc) whenever the video mode or another parameter is changed. When the USB PhidgetServo card is enabled for pan-tilt control, firecam prints some relevant information as well. If all this detailed information is not needed, firecam's output can be redirected to /dev/null: firecam > /dev/null. This command should also be used if firecam is started from a desktop application launcher applet.
Main Window of firecam
firecam's Main Window has a fairly large number of widgets for Camera control, video Filter selection, Target management and Tracking. The screen shot below shows firecam receiving and displaying a video stream from the Camera. The subject in the camera's field is a small bookshelf on the wall of my room, with small "targets" made of pieces of black tape attached to its surface. The video stream from the Camera is displayed in the "Camera Output" frame at the top left corner of the Window. The "Filter Output" frame at the top right of the window displays the video output from the video Filter, if one is selected. In this case "None Selected" is set in the "First Video Processing Filter" combo box so the frame is blank.
In the "Camera Control" frame at the lower left of the Main Window there are two drop-down combo box menus for selecting the camera's video mode. The choices presented in these menus are the ones offered by the Camera, e.g. firecam queries the Camera for all its relevant capabilities and displays them in the appropriate menu. The "Standard Modes & Color Codings" menu allows selection of the video size and color coding for the IIDC "standard" modes (Format 1, Format 2 etc). The "Format-7 Modes & Color Codings" combo box menu allows the selection of a Format-7 video mode, if available. When a Mode is selected, a pop-up dialog box opens to allow selection of the Region of Interest, e.g. the Image Height and Width, Left and Top position of the Region of Interest, the Color Coding and Frame Rate.
Below and to the left of the Mode selection combo boxes, the "Frame Rate" combo box menu allows the selection of the Frame Rate for the Standard video Modes (Format-7 modes have the frame rate selection in the pop-up dialog box). To the right of the Frame Rate combo box, the "Iso Tx Speed" menu allows selection of the Isochronous Transmission speed. Normally a speed of 400 Mb/s is selected, as lower speeds may not be sufficient for all modes.
The "Set Feature Element Value" combo box menu allows the setting of available "Features" in the Camera, e.g. Brightness, Contrast, Exposure, Pan & Tilt (Format-7) etc. Selecting a Feature from the menu opens a Dialog Box suitable for setting the value of this Feature manually, as well as activating the "One Push" control, if available. Also it is possible to set the feature to "Off (Fixed)" or "Auto" control (by the Camera itself).
At the bottom left of the "Camera Control" frame, the "Start" toggle button can be used to start and stop Isochronous transmission of the video stream. The "Reset" button can initiate a Camera reset, if such a feature is available. Please note though, that a reset can sometimes cause firecam to "freeze" and necessitate an exit and restart. Finally, at the bottom right of the "Camera Control" frame the "Conversion" menu allows the selection of a video Conversion function, to convert incoming video to a form (RGB8) suitable for display in a GTK2 DrawingArea widget. If "No Conversion" is selected, incoming video is converted (pixel to pixel) to RGB8 Greyscale and displayed as Monochrome video. The other selections allow conversion of Bayer tiled video to RGB8 Monochrome, to a raw Bayer tile display and to interpolated RGB8 color video.
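The simplest of the Bayer conversions above can be sketched roughly as follows. This is a minimal illustration, not firecam's actual conversion code, and it assumes an RGGB tile layout (real cameras may use GRBG, GBRG etc):

```c
#include <stdint.h>

/* Hypothetical sketch: collapse each 2x2 RGGB Bayer tile into one RGB8
 * pixel, halving the image in each dimension.  Assumed tile layout:
 *   R G
 *   G B
 * w and h are the Bayer image dimensions (even); out is (w/2)*(h/2)*3. */
static void bayer_rggb_to_rgb8(const uint8_t *bayer, uint8_t *out,
                               int w, int h)
{
    for (int y = 0; y < h; y += 2) {
        for (int x = 0; x < w; x += 2) {
            uint8_t r  = bayer[y * w + x];
            uint8_t g1 = bayer[y * w + x + 1];
            uint8_t g2 = bayer[(y + 1) * w + x];
            uint8_t b  = bayer[(y + 1) * w + x + 1];
            uint8_t *p = out + ((y / 2) * (w / 2) + x / 2) * 3;
            p[0] = r;
            p[1] = (uint8_t)(((int)g1 + g2) / 2); /* average the two greens */
            p[2] = b;
        }
    }
}
```

Full-resolution demosaicing interpolates missing channels from neighboring tiles instead, but the per-tile collapse above shows the basic idea.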
The "Video Processing" frame in the lower center part of the Main Window contains some widgets, which can be used to control processing of the incoming video stream. firecam has a number of Video Processing Filters that can be used to detect objects in the Camera's field, according to some characteristic quality of the objects. These filters are selectable from the "First Video Processing Filter" combo box menu.
Below this combo box, there are two sliders that can be used to adjust object detection parameters or thresholds. Then there is the "Second Video Processing Filter" selection menu with its own two sliders, used to specify a second filter that can be cascaded behind the first one. In this way the video output from the first filter can be re-filtered by the second filter for a stricter separation of "Targets" (objects). Currently only the motion filter is available as the Second filter. Finally, the two lowest combo box menus in this frame allow the selection of the "active" pixels in the incoming video stream and the Connectivity Filter. The "Active Pixels" menu has the following items:
All four pixels of a Bayer tiled video stream are used by the Video Filter. This means that the luminance values of each Bayer tile's pixels are summed and averaged into a single Greyscale pixel per tile, which is then used by the Filter. This makes White the most prominent color in the incoming video stream. Note that this is done only by the filters that take a Monochrome (Greyscale) video stream as input.
The luminance value of the Red pixels in each Bayer tile is used to form a Monochrome video stream for the Filter. This makes Red the most prominent color in the incoming video stream. Again, this is done only by the filters that take a Monochrome (Greyscale) video stream as input. The same is also true for the rest of the menu items.
The luminance value of the Green pixels in each Bayer tile is used to form a Monochrome video stream for the Filter. This makes Green the most prominent color in the incoming video stream.
The luminance value of the Blue pixels in each Bayer tile is used to form a Monochrome video stream for the Filter. This makes Blue the most prominent color in the incoming video stream.
The luminance values of the Blue and Green pixels in each Bayer tile are summed and averaged to form a Monochrome video stream for the Filter. This makes Cyan the most prominent color in the incoming video stream.
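The "Active Pixels" reductions above could look roughly like this in C. This is a hypothetical sketch, not firecam's actual code: the enum and function names are illustrative, and an RGGB tile layout is assumed:

```c
#include <stdint.h>

/* Illustrative names, not firecam's actual identifiers. */
typedef enum { PIX_ALL, PIX_RED, PIX_GREEN, PIX_BLUE, PIX_CYAN } active_pixels_t;

/* Collapse one 2x2 RGGB Bayer tile (top-left corner at x,y; w is the
 * Bayer image width) into a single greyscale value, using only the
 * selected channel(s), as described in the menu items above. */
static uint8_t tile_luma(const uint8_t *bayer, int w, int x, int y,
                         active_pixels_t sel)
{
    int r  = bayer[y * w + x];
    int g1 = bayer[y * w + x + 1];
    int g2 = bayer[(y + 1) * w + x];
    int b  = bayer[(y + 1) * w + x + 1];

    switch (sel) {
    case PIX_RED:   return (uint8_t)r;
    case PIX_GREEN: return (uint8_t)((g1 + g2) / 2);
    case PIX_BLUE:  return (uint8_t)b;
    case PIX_CYAN:  return (uint8_t)((b + (g1 + g2) / 2) / 2);
    case PIX_ALL:
    default:        return (uint8_t)((r + g1 + g2 + b) / 4);
    }
}
```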
The "Connectivity" combo box menu allows the selection of a Connectivity filter that follows the Video Processing Filter. The Connectivity filter "binds" nearby pixels in the Processing Filter's video output so that they form a single "Target" (object). This is needed for the Target Tracking functions available in firecam and for showing individual Targets in different colors in the filtered video display. The following Connectivity filters are available:
This filter implements a simple 8-connectivity algorithm to bind neighboring pixels into a single object.
This filter implements a simple "snake" algorithm to identify a border that encloses nearby pixels. Only the border pixels of each object are then shown in the filtered video display.
This filter first identifies horizontal "pixel runs", e.g. maximal sequences of "white" pixels in a line of the filtered video that have no more than one "black" pixel between them. Then it identifies pixel runs that are no more than one line apart vertically, provided their edges are no more than one pixel apart horizontally. Connected pixel runs are then bound into one single object. This filter seems to be somewhat more efficient than the others as it only examines individual pixels in horizontal lines.
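A run-based connectivity pass of the kind described can be sketched as below. This is a simplified illustration, not firecam's actual filter: it merges runs on adjacent lines whose extents come within one pixel of each other, and it omits the one-black-pixel gap tolerance within a run:

```c
#include <stdint.h>

#define MAX_RUNS 256

struct run { int y, x0, x1; };

/* union-find root lookup (no path compression, for brevity) */
static int find_root(int *parent, int i)
{
    while (parent[i] != i) i = parent[i];
    return i;
}

/* Collect horizontal runs of white (255) pixels, merge runs on adjacent
 * lines whose horizontal extents touch or overlap (within one pixel),
 * and return the number of distinct objects found. */
static int count_objects(const uint8_t *img, int w, int h)
{
    struct run runs[MAX_RUNS];
    int parent[MAX_RUNS];
    int nruns = 0;

    for (int y = 0; y < h; y++) {
        int x = 0;
        while (x < w) {
            if (img[y * w + x] == 255) {
                int x0 = x;
                while (x < w && img[y * w + x] == 255) x++;
                if (nruns < MAX_RUNS) {
                    runs[nruns] = (struct run){ y, x0, x - 1 };
                    parent[nruns] = nruns;
                    nruns++;
                }
            } else {
                x++;
            }
        }
    }

    for (int i = 0; i < nruns; i++)
        for (int j = 0; j < i; j++)
            if (runs[i].y - runs[j].y == 1 &&
                runs[i].x0 <= runs[j].x1 + 1 &&
                runs[j].x0 <= runs[i].x1 + 1)
                parent[find_root(parent, i)] = find_root(parent, j);

    int count = 0;
    for (int i = 0; i < nruns; i++)
        if (find_root(parent, i) == i) count++;
    return count;
}
```

The efficiency claim in the text follows from this structure: the pass touches each pixel once per line and then works only on the (much smaller) list of runs.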
The screen shot below shows firecam using the "Contrast Islet" filter and the "Pixel Runs" connectivity function to identify the small black patches on the bookshelf surface as individual Targets (objects). The Target Tracking functions are also in use, pointing the Camera to track a selected target (in the red cross-hair) and center it within the larger green cross-hair that signifies the center of the Camera field. Some of the black patches appear not to be detected, but this is because their size and the lighting conditions put them just on the threshold of detection. The result is intermittent detection, so that at the moment of the screen capture some targets are not marked by a cross-hair. The camera is rotated in azimuth and elevation by the USB-controlled Pan-Tilt mounting.
Video Processing filters
Motion Detection (Mono8 and Raw8):
These filters identify pixels that move against a static background. The input can be a Mono8 (8-bit Greyscale) or Raw8 (Bayer tile) video stream, as per the filter selected.
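A frame-differencing motion filter of this kind can be sketched as follows. This is a minimal illustration under the assumption that motion is detected by comparing each pixel against the previous frame; firecam's actual algorithm may differ:

```c
#include <stdint.h>
#include <stdlib.h>

/* Mark a pixel white in the output buffer when its luminance differs
 * from the previous frame by more than a user-set threshold, mirroring
 * the slider-controlled thresholds in the "Video Processing" frame. */
static void motion_filter(const uint8_t *prev, const uint8_t *cur,
                          uint8_t *out, int npixels, int threshold)
{
    for (int i = 0; i < npixels; i++)
        out[i] = (abs((int)cur[i] - (int)prev[i]) > threshold) ? 255 : 0;
}
```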
Color Gradient (Raw8 or RGB8): These filters identify pixels whose R/G/B ratios are close to specified values. The "Fast" filter uses a simpler method to calculate RGB proportions. The input can be a Raw8 (Bayer tile) or an RGB8 color video stream. The required R/G/B ratios are specified by a left-click on the desired Target in the Camera video display, whereupon firecam will read the colors under the pointer and use them in the filters.
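The ratio test could look roughly like this. It is a hypothetical sketch, not firecam's actual code: each channel's share of the pixel's total intensity is compared against the reference color's shares (e.g. the color picked up by the left-click):

```c
#include <stdint.h>

/* Return 1 when the pixel's R:G:B proportions are within `tolerance`
 * of the reference color's proportions, 0 otherwise.  Proportions are
 * intensity-normalized, so a bright and a dark pixel of the same hue
 * both match. */
static int color_matches(uint8_t r, uint8_t g, uint8_t b,
                         uint8_t ref_r, uint8_t ref_g, uint8_t ref_b,
                         double tolerance)
{
    double sum = r + g + b, ref_sum = ref_r + ref_g + ref_b;
    if (sum == 0 || ref_sum == 0)
        return sum == ref_sum;          /* both black, or no match */
    double dr = r / sum - ref_r / ref_sum;
    double dg = g / sum - ref_g / ref_sum;
    double db = b / sum - ref_b / ref_sum;
    return (dr < 0 ? -dr : dr) < tolerance &&
           (dg < 0 ? -dg : dg) < tolerance &&
           (db < 0 ? -db : db) < tolerance;
}
```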
Histogram (Mono8, Raw8 or RGB8):
These filters make a Histogram of pixel values for each frame of incoming video, then identify pixels whose value is within a range of frequency of occurrence. The Mono8 and Raw8 filters use only the luminance value of the incoming video's pixels, while the RGB8 filter uses color proportion.
This filter is not suitable for detecting targets and it cannot be followed by a Connectivity filter. It was only an experiment but it is sometimes useful in identifying luminance border lines in the video stream.
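For the Mono8 case, the frequency-of-occurrence idea above can be sketched as follows (an illustration, not firecam's actual implementation):

```c
#include <stdint.h>

/* Build a histogram of luminance values for the frame, then mark the
 * pixels whose value occurs between min_count and max_count times. */
static void histogram_filter(const uint8_t *in, uint8_t *out, int npixels,
                             int min_count, int max_count)
{
    int hist[256] = {0};
    for (int i = 0; i < npixels; i++)
        hist[in[i]]++;
    for (int i = 0; i < npixels; i++) {
        int c = hist[in[i]];
        out[i] = (c >= min_count && c <= max_count) ? 255 : 0;
    }
}
```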
Edge Detection (Raw8):
These filters identify objects by their edge (step in luminance value). The "Fast" filter uses a simpler method for a small increase in speed.
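A luminance-step edge detector of this kind can be sketched as below. This is a minimal illustration (comparing each pixel against its right and lower neighbors), not the exact algorithm in firecam:

```c
#include <stdint.h>
#include <stdlib.h>

/* Mark a pixel white when the luminance step to its right or lower
 * neighbor exceeds the threshold, i.e. the pixel sits on an edge. */
static void edge_filter(const uint8_t *in, uint8_t *out,
                        int w, int h, int threshold)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            int gx = (x + 1 < w) ? abs(in[y * w + x + 1] - in[y * w + x]) : 0;
            int gy = (y + 1 < h) ? abs(in[(y + 1) * w + x] - in[y * w + x]) : 0;
            out[y * w + x] = (gx > threshold || gy > threshold) ? 255 : 0;
        }
}
```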
Contrast Islet (Raw8): This filter identifies a collection of pixels that have a fairly large difference of luminance from the surrounding background, resembling a small island ("islet") in a "sea" of background pixels. This filter is probably the most useful in identifying objects against a relatively clear background, like birds in the sky or boats in the sea. Indeed there was a proposal to adapt firecam for tracking birds with a video camera, for wildlife photography.
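The islet test can be sketched roughly as follows. This is a hypothetical illustration of the idea, not firecam's actual filter: the "sea" of background is approximated by a ring of eight samples at a fixed radius around the candidate pixel:

```c
#include <stdint.h>
#include <stdlib.h>

/* Return 1 when the pixel at (x,y) differs from the mean of a ring of
 * surrounding background samples by more than `threshold`, i.e. it
 * looks like part of an "islet" against the background. */
static int is_islet_pixel(const uint8_t *in, int w, int h,
                          int x, int y, int radius, int threshold)
{
    static const int dx[8] = { -1, 0, 1, -1, 1, -1, 0, 1 };
    static const int dy[8] = { -1, -1, -1, 0, 0, 1, 1, 1 };
    int sum = 0, n = 0;

    for (int k = 0; k < 8; k++) {
        int nx = x + dx[k] * radius, ny = y + dy[k] * radius;
        if (nx >= 0 && nx < w && ny >= 0 && ny < h) {
            sum += in[ny * w + nx];
            n++;
        }
    }
    if (n == 0)
        return 0;
    return abs((int)in[y * w + x] - sum / n) > threshold;
}
```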
Targets (Objects) detected by the Video and Connectivity Filters can be tracked physically by rotating the Camera in Azimuth and Elevation, using the USB-controlled Pan-Tilt mounting. The Target Tracking functions of firecam have been developed for the SPT200 Pan-Tilt mounting from ServoCity, whose servos are controlled by the USB PhidgetServo 4-Servo control card from Phidgets Inc. But please note that the PhidgetServo card has now been superseded by a newer version, whose compatibility with firecam's current built-in tracking functions I cannot verify.
The tracking functions of firecam are controlled from various widgets, in frames on the right side of firecam's Main Window. At the top right, the "Filter Output" frame displays the output from the video processing functions. If no Connectivity filter is selected, the pixels detected by the Video Processing filter will appear in white, against a black background. With a Connectivity filter selected, its output will appear in different colors for each connected group of pixels (for each Target), against a black background. Please note that with no Connectivity filter selected, target tracking is not possible because Object pixels are not "bound" into Targets. Here is a description of the Tracking controls:
The Max Target Size slider specifies the maximum acceptable size of a target, as a percentage of the Filtered Video size. This sets an upper limit to the size of an Object that can be treated as a Target.
The Max Aspect Ratio slider sets an upper limit to the aspect ratio of the bounding box that encloses an Object, above which an Object will not be treated as a Target. This is useful in preventing objects like telephone poles or power lines from being treated as a valid Target.
The Min Target Size slider specifies the minimum size (in pixels) of an Object, below which it will not be treated as a Target. This is useful for rejecting specks appearing in the Camera's video stream for some reason.
The Min Background size slider specifies the minimum size of the background pixels, surrounding an object, before it is treated as a Target. This size is a percentage of the filtered video size.
The Maximum Targets combo box menu can be used to specify the maximum number of Objects that are treated as Targets. The count of Objects is from left to right, top to bottom of the filtered video frame.
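The size and shape limits above amount to a qualification test on each candidate object's bounding box. The sketch below illustrates the idea; the structure and field names are hypothetical, not firecam's actual data structures, and the Min Background check is omitted for brevity:

```c
/* Illustrative limits, mirroring the Max Target Size, Max Aspect Ratio
 * and Min Target Size sliders described above. */
struct target_limits {
    double max_size_frac;   /* max bounding-box area, fraction of frame */
    double max_aspect;      /* max bounding-box aspect ratio            */
    int    min_pixels;      /* min object size in pixels                */
};

/* Return 1 when an object with the given bounding box and pixel count
 * passes all limits and may be treated as a Target. */
static int qualifies_as_target(int box_w, int box_h, int pixel_count,
                               int frame_w, int frame_h,
                               const struct target_limits *lim)
{
    double area_frac = (double)(box_w * box_h) / (frame_w * frame_h);
    double aspect = (box_w > box_h) ? (double)box_w / box_h
                                    : (double)box_h / box_w;
    return area_frac <= lim->max_size_frac &&
           aspect <= lim->max_aspect &&
           pixel_count >= lim->min_pixels;
}
```

A long thin object like a power line fails the aspect-ratio test even when its area and pixel count are acceptable.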
The Speed Convergence slider sets a factor that determines how fast the measured angular speed of a Target (the speed with which it moves in the Camera's field plus the rotation speed of the Camera's pan-tilt mounting) is averaged. Higher values make the average of the measured speed converge more slowly.
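This behaves like an exponential moving average. The sketch below is an assumption about how the slider value maps onto the averaging (a 0..1 weight on the previous average), not firecam's actual code:

```c
/* Exponential moving average of the measured Target speed: a higher
 * `convergence` factor weights the history more heavily, so the
 * average follows new measurements more slowly, matching the slider's
 * described behavior. */
static double update_speed_avg(double avg, double measured, double convergence)
{
    return convergence * avg + (1.0 - convergence) * measured;
}
```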
The Target Auto Re-acquisition Priority combo box menu allows selection of the criteria for acquiring a new Target when the one being tracked drops out of detection for some reason.
"Nearest Speed in Cross-Hair" will make the tracking functions select a Target, within the green cross-hair circle, whose angular speed is nearest to the previously tracked Target's speed.
"Nearest Target in Cross-Hair" will make the tracking functions select a Target, within the green cross-hair circle, which is nearest to the center of the green cross-hair circle.
"Nearest Speed" will make the tracking functions select any Target whose angular speed is nearest to the previously tracked Target's speed.
"Nearest Target" will make the tracking functions select any Target which is nearest to the green cross-hair center.
If firecam finds itself with no targets to select, when the currently tracked target is gone, then it will continue rotating the Camera in the same direction and with the same speed it was using at the moment the last Target was lost. If a Target appears that matches the criteria set in the above combo box menu, firecam will resume tracking this Target.
This frame contains some widgets for starting/stopping the Target Tracking functions of firecam, enabling the USB PhidgetServo card and adjusting the position of the Camera.
The "Select Target" indicator shows a green "LED" icon when tracking is enabled by clicking on a Target in the Camera video display (upper left frame).
The "Keep Watch" toggle button instructs firecam to look out for the appearance of a Target in the Camera's field. The first Object that will be identified as a Target by the video filters will be selected for tracking and it will be centered and followed by the tracking functions, by controlling the Pan-Tilt mounting via the PhidgetServo card.
The "Acquire Target" button will signal the tracking functions to re-lock on the first Target, searching from left to right and top to bottom of the video display. This is useful when firecam loses its lock on a Target and "wanders" away.
The "Stop Tracking" button disables and resets the tracking functions of firecam.
The "Azimuth Servo Control" widgets enable the activation of the Azimuth Servo, the setting of the Azimuth position demand (in degrees) and the reset of the Azimuth position to the "Home" default value.
The "Elevation Servo Control" widgets offer the same controls, as above, for the Elevation Servo.
The Camera and its Pan-Tilt Mounting: The camera used during firecam's development is the Point Grey Research Firefly MV FFMV-03MTC, which is an IEEE1394/IIDC-1.31 compliant unit. This camera is simply connected to an available Firewire port on the computer and it is controlled by firecam using the raw1394 interface API.
The Pan-Tilt mounting is the SPT200 assembly, supplied by ServoCity. The servos used with the SPT200 are the HSR5995TG by HiTech. Other similar servos can be used, and different Pan-Tilt assemblies can be employed if the servos they are equipped with can be controlled by firecam.
The servos are controlled by firecam via a PhidgetServo card, which is a USB card supplied by Phidgets Inc and which is capable of controlling the position of compatible servos with high accuracy. Please Note though, the PhidgetServo card has been superseded by a new model, which I expect is not compatible with firecam's built-in driver functions.
For my application, I installed the PhidgetServo card into a cast aluminum box, together with a rechargeable battery pack as a power supply for the card and servos. The SPT200 pan-tilt assembly is mounted on top of the box, with the Azimuth servo inside the box. The elevation servo mounts inside the tilt assembly of the SPT200. Both servos' control wires are brought inside the box and connected to the PhidgetServo card. firecam controls this card with its own built-in driver functions, using libusb-1.0. Phidgets Inc supplies a driver library for all its USB cards but I preferred to write my own user-space drivers so that I could integrate them with the tracking functions.
This is a photo of the Firefly MV camera mounted on the SPT200 Pan-Tilt assembly. At the bottom is a cast aluminum box that contains a rechargeable battery, to supply power to the PhidgetServo card and the HS5995TG servos. The red and black cable at the top left of the box can be used to supply power to the camera, if the assembly is used with a laptop that has no power pins in the Firewire socket.
The connector at the front of the box is for the USB control cable. The Firewire cable is seen at the back of the camera. At the right side of the box there is an ON-OFF switch, a charger socket and two LED's, one is a Charge indicator and the other a POWER-ON indicator.
The azimuth servo for the SPT200 is in the box, whereas the elevation servo is within the SPT200 assembly. The Firefly MV is mounted in an aluminum casing and is fitted with a C type lens.
5. Bugs and annoyances:
firecam is my first attempt at writing code for a Firewire camera and a USB gadget. I had to learn the raw1394 interface API and the libusb-1.0 API. I also had to read a lot of material on image processing to be able to write code for video processing and object recognition and tracking. For these reasons, firecam is probably rather simplistic and limited, and it is possible that there are undetected bugs in the code waiting for the right circumstances to make an appearance. Therefore I believe firecam should be considered an experimental application.
6. Version History
Version 0.1-beta: First beta release of firecam.
Version 0.2-beta: I made extensive modifications to the source code to silence a large number of warnings generated by the LLVM clang compiler when used with the -Weverything option. These were mostly cases of implicit conversions between variable types, like int to char or uint to int etc. In the process I made a few changes to the functionality of some functions, to improve reception of images over the Firewire link. I also changed the type of functions that return errors, from int to gboolean, and modified the handling of error conditions where needed.
Version 0.3: I have updated the basic files of the GNU Autotools build system, to be compatible with the current version of these tools at the time of writing (February 2013).
7. Copying: This software package is released under the GNU General Public License. Please see the COPYING file for more details.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.