
Chapter 2

Face Detection and Tracking Using ROS, OpenCV, and Dynamixel Servos

Face detection and tracking is a common capability of most service and social robots. These robots can recognize a face and turn to follow its movement. There are plenty of face detection and tracking implementations on the web. Most trackers have a camera mounted on top of a pan-and-tilt unit that can rotate both horizontally and vertically. In this chapter, we will work through a simple tracking example that uses a pan-only mechanism.

We implement it with a USB webcam mounted on an AX-12 Dynamixel servo. Both the control of the Dynamixel servo and the image processing are done in ROS.

This chapter covers the following topics:

● Overview of the project
● Hardware and software requirements
● Configuring the Dynamixel AX-12 servo
● Block diagram of the project
● Interfacing ROS with Dynamixel servos
● Creating ROS packages for face detection and tracking control
● The ROS-OpenCV interface
● Implementing the face tracker and the face tracking controller
● The final run

2.1 Overview of the project

The goal of this project is to build a simple servo-based tracker that follows a face along the horizontal axis. The hardware of the face tracker consists of a webcam, a Dynamixel AX-12 servo, and a bracket that mounts the webcam on the servo. The tracker follows the face until it is at the center of the webcam image; once the face reaches the center, the tracker stops moving and waits for the face to move again. Face detection is done with OpenCV and the ROS interface, and the servo is controlled using the Dynamixel servo driver in ROS.


To complete the tracking system, we create two ROS packages: one for detecting the face and finding its centroid, and one that sends commands to the servo and tracks the face based on the centroid position.

Now that the goal is clear, let's look at the hardware and software requirements of the project.

The complete code of this project can be downloaded from the following Git repository. Clone it with:

$ git clone https://github.com/qboticslabs/ros_robotics_projects

2.2 Hardware and software requirements

The following table lists the hardware components needed for this project, with an approximate price and a purchase link for each.

No. | Component | Approx. price (USD) | Purchase link
1 | Webcam | 32 | https://amzn.com/B003LVZO8S
2 | AX-12A Dynamixel servo with mounting bracket | 76 | https://amzn.com/B0051OXJXU
3 | USB-to-Dynamixel adapter | 50 | http://www.robotshop.com/en/robotis-usb-to-dynamixel-adapter.html
4 | 3-pin cables for AX-12 servos | 12 | http://www.trossenrobotics.com/p/100mm-3-Pin-DYNAMIXEL-Compatible-Cable-10-Pack
5 | Power adapter | 5 | https://amzn.com/B005JRGOCM
6 | 6-port power hub for the AX and MX series | 5 | http://www.trossenrobotics.com/6-port-ax-mx-power-hub
7 | USB extension cable | 1 | https://amzn.com/B00YBKA5Z0

The total price is approximately $190-200.

The links and prices may change; if a link is broken, a Google search should find the item. Shipping and taxes are not included in the prices listed above.

If this is more than you want to spend, there are cheaper alternatives for building the project. The core component is the Dynamixel servo; it can be replaced with an RC servo motor, which costs only around $10, plus an Arduino board (around $20) to control it. The ROS-Arduino interface is covered in later chapters, so you could also build the face tracker with an Arduino and an RC servo.

Next, let's look at the software requirements of the project. The list includes the ROS framework, the operating system versions, and the ROS packages:


No. | Software | Approx. price (USD) | Download link
1 | Ubuntu 16.04 LTS | Free | http://releases.ubuntu.com/16.04/
2 | ROS Kinetic LTS | Free | http://wiki.ros.org/kinetic/Installation/Ubuntu
3 | ROS usb_cam package | Free | http://wiki.ros.org/usb_cam
4 | ROS cv_bridge package | Free | http://wiki.ros.org/cv_bridge
5 | ROS Dynamixel controller | Free | https://github.com/arebgun/dynamixel_motor
6 | Windows 7 or higher | approx. 120 | https://www.microsoft.com/en-in/software-download/windows7
7 | RoboPlus (Windows application) | Free | http://www.robotis.com/download/software/RoboPlusWeb%28v1.1.3.0%29.exe

The table above lists the software we will use in this project. We need both Windows and Ubuntu, so it helps if you have a dual-boot setup on your computer.

Let's first see how to install all of this software.

Installing dependent ROS packages

We have already installed and configured Ubuntu 16.04 and ROS Kinetic. Now let's look at the packages we need to install for this project.

Installing the usb_cam ROS package

Let's first look at how the usb_cam package is used in ROS. The usb_cam package is the ROS driver for Video4Linux (V4L) USB cameras. V4L is a collection of Linux device drivers for capturing live video from webcams. The usb_cam package uses V4L devices and publishes the video stream as ROS image messages. We can subscribe to these messages and perform whatever processing we need. The official ROS page of this package is listed in the table above; check it for the different settings and configurations the package offers.

(1) Creating a ROS workspace for dependencies

Before installing the usb_cam package, let's create a ROS workspace for storing the dependencies of all the projects mentioned in this book. We can create a separate workspace for keeping the project code itself.

Create a ROS workspace called ros_project_dependencies_ws in the home folder, then clone the usb_cam package into its src folder:

$ git clone https://github.com/bosch-ros-pkg/usb_cam.git

Build the workspace using catkin_make.

After building the package, install the v4l-utils Ubuntu package. It is a collection of command-line V4L utilities used by the usb_cam package:

$ sudo apt-get install v4l-utils
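For reference, the whole dependency-workspace setup can be done in one pass. This is only a minimal sketch, assuming the standard catkin workspace layout described above:

$ mkdir -p ~/ros_project_dependencies_ws/src
$ cd ~/ros_project_dependencies_ws/src
$ git clone https://github.com/bosch-ros-pkg/usb_cam.git
$ cd ~/ros_project_dependencies_ws
$ catkin_make
$ source devel/setup.bash
$ sudo apt-get install v4l-utils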

(2) Configuring a webcam on Ubuntu 16.04

After installing these two packages, connect the webcam to the PC and check whether it is detected properly.

Open a Terminal and run the dmesg command to check the kernel logs. If the camera is detected in Linux, you will see log entries similar to Figure 2-1:

$ dmesg

Figure 2-1: Kernel logs of the webcam device
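Besides dmesg, the v4l-utils package installed earlier can list the detected video devices and the formats they support. A quick check, assuming the camera enumerates as /dev/video0:

$ v4l2-ctl --list-devices
$ v4l2-ctl -d /dev/video0 --list-formats-ext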

You can use any webcam that has driver support in Linux. In this project, an iBall Face2Face webcam (http://www.iball.co.in/Product/Face2Face-C8-0-Rev-3-0-/90) was used for tracking. You can also go for the popular Logitech C310 webcam mentioned in the hardware requirements; it gives better performance and tracking.

If the webcam is supported in Ubuntu, you can open the video device with a tool called Cheese, which is simply a webcam viewer.

Enter the command cheese in the Terminal. If the tool is not installed, you can install it with the following command:

$ sudo apt-get install cheese

If the driver and the device are working, you will get a video stream from the webcam, as shown in Figure 2-2.

Congratulations! Your webcam now works in Ubuntu. Is everything ready, then? Not yet. The next step is to test the usb_cam package and confirm that it runs properly in ROS.

The complete source code of this project can be cloned from the following Git repository:

$ git clone https://github.com/qboticslabs/ros_robotics_projects


Figure 2-4: The topics being published by the usb_cam node
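To reproduce what the figure shows on your own setup, the node can be started directly and its topics inspected. This is only a sketch; the parameter names (video_device, pixel_format) and the exact topic names should be verified against the usb_cam wiki page and with rostopic list:

$ roscore
$ rosrun usb_cam usb_cam_node _video_device:=/dev/video0 _pixel_format:=yuyv
$ rostopic list | grep usb_cam
$ rosrun image_view image_view image:=/usb_cam/image_raw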

Next, we will see how to configure the Dynamixel AX-12A servo.

(4) Configuring the Dynamixel servo using RoboPlus

The Dynamixel servo can be configured with the RoboPlus program, provided by ROBOTIS INC (http://en.robotis.com/index/), the manufacturer of the Dynamixel servos.

To configure the servo, you have to switch the operating system to Windows, because the RoboPlus tool runs on Windows. In this project, the servo was configured under Windows 7.

RoboPlus can be downloaded from http://www.robotis.com/download/software/RoboPlusWeb%28v1.1.3.0%29.exe. If the link is broken, you can search Google for RoboPlus 1.1.3. After installing the software, you will see the window shown in Figure 2-5. Open the Expert tab to find the application for configuring Dynamixel servos.

Figure 2-5: The Dynamixel servo manager in RoboPlus


Before starting the Dynamixel wizard and configuring the servo, the servo has to be connected and powered up correctly. Figure 2-6 shows the AX-12A Dynamixel servo used in this project and its pin connections.

Figure 2-6: The AX-12A Dynamixel servo and its pin connection diagram

Unlike other RC servos, the AX-12 is an intelligent actuator with an on-board microcontroller that can monitor and customize all of the servo's parameters. The geared output of the servo drives a servo horn, and we can attach any joint to the horn. Each servo has two connection ports on the back, and each port has three pins: power, ground, and data. The ports are daisy-chained, so one servo can be connected to the next. Figure 2-7 shows how the servo is connected to the computer.

Figure 2-7: Connection diagram of the Dynamixel AX-12A servo (daisy-chained TTL data line, 9-12 V power line, USB-to-Dynamixel adapter, USB to the PC)

The main hardware component connecting the Dynamixel servo to the computer is the USB-to-Dynamixel adapter. It is a USB-to-serial adapter that converts USB to RS232, RS485, and TTL signals. In the AX-12, data communication is done over TTL. As Figure 2-7 shows, each port has three pins: the data pin sends data to and receives data from the AX-12, and the power and ground pins supply the servo. The AX-12A supply voltage range is 9-12 V. The second port of each Dynamixel is used for daisy-chaining, and up to 254 servos can be connected in this way.


The official pages for the AX-12A and the USB-to-Dynamixel adapter are:

AX-12A: http://www.trossenrobotics.com/dynamixel-ax-12-robot-actuator.aspx
USB-to-Dynamixel: http://www.trossenrobotics.com/robotis-bioloid-usb2dynamixel.aspx

To get the Dynamixel working properly, we need to know a bit more about it. Let's look at some important specifications of the Dynamixel AX-12A, taken from the servo manual and listed in Figure 2-8.

Weight: 54.6 g (AX-12A)
Dimensions: 32 mm x 50 mm x 40 mm
Resolution: 0.29°
Gear reduction ratio: 254:1
Stall torque: 1.5 N·m (at 12 V, 1.5 A)
No-load speed: 59 rpm (at 12 V)
Operating angle: 0°-300°, or endless turn
Operating temperature: -5°C to +70°C
Voltage: 9-12 V (recommended: 11.1 V)
Command signal: digital packets
Protocol type: half-duplex asynchronous serial communication (8 bits, 1 stop bit, no parity)
Physical connection: TTL multidrop bus (daisy-chain connection)
ID: 254 IDs (0-253)
Communication speed: 7343 bps to 1 Mbps
Feedback: position, temperature, load, input voltage, and so on
Material: engineering plastic

Figure 2-8: AX-12A specifications

The Dynamixel servo can communicate with the PC at speeds of up to 1 Mbps. It can also provide feedback on various parameters, such as its position, temperature, and current load. Unlike an RC servo, it can rotate up to 300°, and it communicates mainly through digital packets.
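To make the packet idea concrete, the following is a small sketch of a Dynamixel Protocol 1.0 instruction packet that would command servo ID 1 to move to the center goal position of 512 (the same value set later in the Dynamixel servo manager). This is for illustration only; in this project the dynamixel_motor ROS driver builds and sends such packets for us:

#include <cstdint>

// Dynamixel Protocol 1.0 WRITE_DATA packet: move servo ID 1 to goal position 512.
// Layout: 0xFF 0xFF | ID | LENGTH | INSTRUCTION | PARAMS... | CHECKSUM
const uint8_t packet[] = {
    0xFF, 0xFF,   // header
    0x01,         // servo ID = 1
    0x05,         // length = number of parameters (3) + 2
    0x03,         // instruction: WRITE_DATA
    0x1E,         // starting register address 30 (goal position)
    0x00, 0x02,   // goal position 512, low byte first
    0xD6          // checksum = ~(ID + LENGTH + INSTR + params) & 0xFF
};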

(5) Powering up and connecting the Dynamixel servo to the PC

Now we connect the Dynamixel servo to the PC. The standard wiring is shown in Figure 2-9.

The three-pin cable is first plugged into either port of the AX-12; the other port is connected to the 6-port power hub, and another cable connects the hub to the USB-to-Dynamixel adapter. The adapter has to be set to TTL mode. Power can come either from a 12 V mains adapter or from a battery. The power input is a 2.1 x 5.5 mm female barrel jack, so when buying the adapter make sure it has a matching male plug.

Figure 2-9: Connecting the Dynamixel servo to the PC (12 V adapter, power hub, USB-to-Dynamixel adapter)

(6) Installing the USB-to-Dynamixel driver on the PC

We have already introduced the USB-to-Dynamixel adapter; it is a USB-to-serial converter built around an FTDI chip (http://www.ftdichip.com/). The proper FTDI driver has to be installed on the PC for the device to be detected. The driver is only needed on Windows, not on Linux, because the FTDI driver is already part of the Linux kernel. If you have installed RoboPlus, the driver may have been installed along with it; if not, you can install it manually from the RoboPlus installation folder.

Plug the USB-to-Dynamixel adapter into the Windows PC and check Device Manager (right-click My Computer and choose Properties | Device Manager). If the device is detected correctly, you will see an entry like the one in Figure 2-10.

Figure 2-10: The serial port of the USB-to-Dynamixel adapter


Note the serial port assigned to the USB-to-Dynamixel adapter, then start the Dynamixel servo manager inside RoboPlus. Select the port from the list and click the Search button to scan for Dynamixel servos, as shown in Figure 2-11.

Figure 2-11: Selecting the serial port of the USB-to-Dynamixel adapter

Select the COM port you are using from the list (the connection is marked as 1 in the figure). After connecting to the COM port, set the default baud rate to 1 Mbps and click the Start searching button.

If servos appear in the panel on the left, your PC has detected a Dynamixel servo. If no servo is detected, try the following steps:

1) Use a multimeter to confirm that the power supply and connections are correct, and make sure the LED on the back of the servo blinks when power is applied; this tells you whether the problem is with the servo or with the power supply.
2) Update the servo firmware using the recovery wizard in the Dynamixel servo manager (marked as 6 in the figure); the wizard is shown in Figure 2-12. While using the wizard, you may need to power the servo off and on again so that it can be detected.
3) Once the servo is detected, select the servo model and install new firmware. This helps the Dynamixel servo manager detect the servo when its existing firmware is outdated.

Figure 2-12: The Dynamixel servo recovery wizard

If the servos are listed in the Dynamixel servo manager, click one of them to see its complete configuration. For the face tracking project we need to change a few of these settings:

● ID: set the ID to 1
● Baud rate: 1
● Moving Speed: 100
● Goal Position: 512

The modified servo settings are shown in Figure 2-13.

Figure 2-13: The modified servo firmware settings

After changing these settings, you can check whether the servo works correctly by changing its goal position.

Congratulations, the Dynamixel servo is now configured! What comes next? We are going to use the Dynamixel servo from ROS.

The complete source code of this project can be cloned from the following Git repository:

$ git clone https://github.com/qboticslabs/ros_robotics_projects

2.3 Interfacing ROS with Dynamixel servos

If you have successfully configured the Dynamixel servo, interfacing it with ROS on Ubuntu is straightforward. As mentioned earlier, there is no need to install an FTDI driver on Ubuntu because it is already in the kernel. The only thing we have to do is install the ROS Dynamixel driver packages.

The ROS Dynamixel packages are available at the following link:

http://wiki.ros.org/dynamixel_motor

You can install the Dynamixel ROS packages using the commands we'll look at now.

Installing the ROS dynamixel_motor packages

The ROS dynamixel_motor package stack is a dependency of the face tracker project, so we can install it into the ros_project_dependencies_ws ROS workspace.

Open a Terminal and switch to the src folder of the workspace:

$ cd ~/ros_project_dependencies_ws/src

Clone the latest Dynamixel driver packages from GitHub:

$ git clone https://github.com/arebgun/dynamixel_motor

Remember to run catkin_make to build the entire set of Dynamixel driver packages. If the workspace builds without errors, you are done with the dependencies of this project.

Congratulations! The Dynamixel driver packages are now installed in ROS, and all the dependencies required for the face tracker project are in place. So let's start working on the face tracking project packages.

Creating the face tracker ROS packages

Let's start by creating a new workspace that keeps all the ROS project files for this book. You can name the workspace ros_robotics_projects_ws.

Download or clone the source code of the book from GitHub:

$ git clone https://github.com/qboticslabs/ros_robotics_projects
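If you prefer to create the face tracker package yourself instead of using the book's sources, a minimal sketch of the commands is shown below. The dependency list is inferred from the package's CMakeLists.txt discussed later in this chapter:

$ mkdir -p ~/ros_robotics_projects_ws/src
$ cd ~/ros_robotics_projects_ws/src
$ catkin_create_pkg face_tracker_pkg roscpp rospy cv_bridge image_transport sensor_msgs std_msgs message_generation
$ cd ~/ros_robotics_projects_ws && catkin_make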



The cv_bridge package converts the ROS image messages obtained from usb_cam into the OpenCV-compatible data type cv::Mat. After the conversion to cv::Mat, we can process the camera image with the OpenCV APIs.

Figure 2-14 shows how cv_bridge is used in this project.

Figure 2-14: The role of cv_bridge (V4L device → usb_cam node → ROS image messages → CvBridge → OpenCV cv::Mat → face tracking ROS node)

Here, cv_bridge sits between the usb_cam node and the face tracking node. We will look at the face tracking node in detail in the next section; before that, it is worth understanding how it works.

The image_transport package (http://wiki.ros.org/image_transport) is used to transport image messages between ROS nodes. It is always used when subscribing to and publishing ROS image data, and it can apply compression so that images can be transported even over low-bandwidth connections. This package is installed along with the ROS desktop installation.

That covers the OpenCV-ROS interface. In the next section, we will work with the first package of this project, face_tracker_pkg.

The complete code of this project is available from the following Git repository:

$ git clone https://github.com/qboticslabs/ros_robotics_projects

2.5 How the face tracker package works

We have already created (or copied) the face_tracker_pkg package in the workspace and discussed its important dependencies. Now let's see what this package actually does!

The package contains the face_tracker_node ROS node, which tracks faces using the OpenCV APIs and publishes the centroid of the face to a topic. Figure 2-15 shows how face_tracker_node works.

The only part of the face_tracker_node interface you may not be familiar with yet is the face Haar classifier.


Figure 2-15: Block diagram of face_tracker_node (usb_cam node → topic /usb_cam/image_raw, message sensor_msgs/Image → face tracking node with the face Haar classifier → topic /face_centroid, message centroid → face_tracker_controller)

Face Haar classifier: A Haar feature-based cascade classifier is a machine learning approach for object detection. It was proposed by Paul Viola and Michael Jones in their 2001 paper "Rapid Object Detection using a Boosted Cascade of Simple Features". In this method, a cascade file is trained with positive and negative sample images; after training, the file is used to detect objects.

● This project uses a pre-trained Haar classifier file together with OpenCV code. The trained Haar classifier files are available in the OpenCV data folder (https://github.com/opencv/opencv/tree/master/data), and you can swap in the Haar file your application needs. The face classifier is an XML file that describes the features of a face. When the features in an image match the XML description, the region of interest (ROI) of the face can be retrieved from the image using the OpenCV APIs. The Haar classifier of this project is at face_tracker_pkg/data/face.xml.

track.yaml: A ROS parameter file that contains the path of the Haar classifier file, the input image topic, the output image topic, and the flags that enable and disable face tracking. Using a ROS parameter file lets us change the node parameters without touching the face tracking code. You can find this file at face_tracker_pkg/config/track.yaml.

usb_cam node: The node of the usb_cam package publishes the image stream from the camera as ROS image messages. It publishes the camera images to the /usb_cam/image_raw topic, which the face tracking node uses as input for face detection. If required, the input topic can be changed in the track.yaml file.

face_tracker_control: The face_tracker_pkg package detects faces and finds the centroid of the face in the image. The centroid consists of two values, X and Y, which are published using a custom message. The controller node subscribes to these centroid values and rotates the Dynamixel servo to keep tracking the face.
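To give an idea of what the controller side looks like, here is a minimal sketch of a node that subscribes to /face_centroid and nudges the servo toward the face. It is not the book's face_tracker_control implementation; the command topic /pan_controller/command (a std_msgs/Float64 position, as exposed by the dynamixel_controllers joint position controller), the assumed image width of 640, the dead zone, and the gain are illustrative assumptions only:

#include <ros/ros.h>
#include <std_msgs/Float64.h>
#include <face_tracker_pkg/centroid.h>
#include <cstdlib>

ros::Publisher servo_pub;
double current_pos = 0.0;                      // current pan position command, in radians

void centroidCb(const face_tracker_pkg::centroid::ConstPtr& msg)
{
  const int screen_center = 640 / 2;           // assumed image width
  int error = msg->x - screen_center;          // pixels off-center
  if (std::abs(error) < 30) return;            // face close enough to center: stop moving
  current_pos -= 0.0005 * error;               // simple proportional step toward the face
  std_msgs::Float64 cmd;
  cmd.data = current_pos;
  servo_pub.publish(cmd);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "face_tracker_controller_sketch");
  ros::NodeHandle nh;
  servo_pub = nh.advertise<std_msgs::Float64>("/pan_controller/command", 1);
  ros::Subscriber sub = nh.subscribe("/face_centroid", 10, centroidCb);
  ros::spin();
  return 0;
}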

Figure 2-16 shows the file structure of face_tracker_pkg.

Next, let's see how the face tracking code works. You can open face_tracker_pkg/src/face_tracker_node.cpp to follow along. This code performs the face detection and publishes the centroid value to a topic.


Figure 2-16: The file structure of face_tracker_pkg

We will now look at and explain the main code snippets.

2.5.1 Understanding the face tracker code

The following code shows the ROS headers used in this node. ros/ros.h has to be included in every ROS C++ node; otherwise, the code will not compile. The other three headers are: the image_transport header, which provides functions for publishing and subscribing to image messages in low bandwidth; the cv_bridge header, which provides functions to convert between OpenCV and ROS data types; and the image_encodings header, which contains the image encoding formats used during ROS-OpenCV conversion:

#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>

The next set of headers is for OpenCV. The imgproc header contains the image processing functions, highgui has the GUI-related functions, and objdetect.hpp provides the APIs for object detection, such as the Haar classifier:

#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include "opencv2/objdetect.hpp"

The last header gives access to a custom message called centroid. The centroid message definition has two fields, int32 x and int32 y, which hold the centroid of the detected face. You can check the message definition in the face_tracker_pkg/msg/centroid.msg file:

#include <face_tracker_pkg/centroid.h>
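Based on the description above, the message file face_tracker_pkg/msg/centroid.msg simply contains the two integer fields:

int32 x
int32 y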

The following lines give names to the raw image window and the face detection window:

static const std::string OPENCV_WINDOW = "raw_image_window";
static const std::string OPENCV_WINDOW_1 = "face_detector";

The following code creates a C++ class for the face detector. Inside it we create a NodeHandle, which is mandatory for a ROS node; an image_transport handle, which helps send ROS image messages across the ROS computation graph; and a publisher for the face centroid, which publishes the centroid values using the centroid.msg definition we created. The remaining definitions hold the parameter values read from the parameter file track.yaml:

class Face_Detector
{
  ros::NodeHandle nh_;
  image_transport::ImageTransport it_;
  image_transport::Subscriber image_sub_;
  image_transport::Publisher image_pub_;
  ros::Publisher face_centroid_pub;
  face_tracker_pkg::centroid face_centroid;
  string input_image_topic, output_image_topic, haar_file_face;
  int face_tracking, display_original_image, display_tracking_image, center_offset, screenmaxx;

The following code retrieves the ROS parameters from the track.yaml file. The advantage of using ROS parameters is that we avoid hard-coding these values inside the program and can change them without recompiling the code:

try{
  nh_.getParam("image_input_topic", input_image_topic);
  nh_.getParam("face_detected_image_topic", output_image_topic);
  nh_.getParam("haar_file_face", haar_file_face);
  nh_.getParam("face_tracking", face_tracking);
  nh_.getParam("display_original_image", display_original_image);
  nh_.getParam("display_tracking_image", display_tracking_image);
  nh_.getParam("center_offset", center_offset);
  nh_.getParam("screenmaxx", screenmaxx);

  ROS_INFO("Successfully Loaded tracking parameters");
}

The following code creates a subscriber for the input image topic and a publisher for the face-detected image. Whenever an image arrives on the input image topic, the imageCb function is called. The topic names are taken from the ROS parameters. The last line of the snippet creates another publisher for the centroid values:

image_sub_ = it_.subscribe(input_image_topic, 1, &Face_Detector::imageCb, this);
image_pub_ = it_.advertise(output_image_topic, 1);

face_centroid_pub = nh_.advertise<face_tracker_pkg::centroid>("/face_centroid", 10);


The next piece of code is the definition of imageCb, the callback for input_image_topic. What it basically does is convert the sensor_msgs/Image data into the cv::Mat OpenCV data type. The cv_bridge::CvImagePtr cv_ptr buffer stores the OpenCV image after the ROS-to-OpenCV conversion performed by the cv_bridge::toCvCopy function:

void imageCb(const sensor_msgs::ImageConstPtr& msg)
{
  cv_bridge::CvImagePtr cv_ptr;
  namespace enc = sensor_msgs::image_encodings;

  try
  {
    cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8);
  }

We have already discussed the Haar classifier; here is the code that loads the Haar classifier file:

string cascadeName = haar_file_face;
CascadeClassifier cascade;
if( !cascade.load( cascadeName ) )
{
  cerr << "ERROR: Could not load classifier cascade" << endl;
}

We now come to the core part of the program: detecting the face in the OpenCV image that was converted from the ROS image message. The following is the call to detectAndDraw(), which performs the face detection; in the second line, the output image topic is published. Using cv_ptr->image we retrieve the cv::Mat data type, and cv_ptr->toImageMsg() converts it back into a ROS image message. The arguments of detectAndDraw() are the OpenCV image and the cascade object:

detectAndDraw( cv_ptr->image, cascade );
image_pub_.publish(cv_ptr->toImageMsg());

Let's look at the detectAndDraw() function, which is adapted from the OpenCV sample code for face detection. Its arguments are the input image and the cascade object. The next bit of code first converts the image to grayscale and equalizes its histogram using the OpenCV APIs; this is a preprocessing step before detecting faces in the image. The cascade.detectMultiScale() function then performs the detection (see http://docs.opencv.org/2.4/modules/objdetect/doc/cascade_classification.html):

Mat gray, smallImg;
cvtColor( img, gray, COLOR_BGR2GRAY );
double fx = 1 / scale;
resize( gray, smallImg, Size(), fx, fx, INTER_LINEAR );
equalizeHist( smallImg, smallImg );
t = (double)cvGetTickCount();
cascade.detectMultiScale( smallImg, faces, 1.1, 15, 0 |CASCADE_SCALE_IMAGE, Size(30, 30) );


The following loop iterates over every face detected by the detectMultiScale() function. For each face it finds the centroid and publishes it to the /face_centroid topic:

for ( size_t i = 0; i < faces.size(); i++ )
{
  Rect r = faces[i];
  Mat smallImgROI;
  vector<Rect> nestedObjects;
  Point center;
  Scalar color = colors[i%8];
  int radius;

  double aspect_ratio = (double)r.width/r.height;
  if( 0.75 < aspect_ratio && aspect_ratio < 1.3 )
  {
    center.x = cvRound((r.x + r.width*0.5)*scale);
    center.y = cvRound((r.y + r.height*0.5)*scale);
    radius = cvRound((r.width + r.height)*0.25*scale);
    circle( img, center, radius, color, 3, 8, 0 );

    face_centroid.x = center.x;
    face_centroid.y = center.y;

    //Publishing centroid of detected face
    face_centroid_pub.publish(face_centroid);
  }


To make the output image window more interactive, text and lines are drawn to indicate whether the user's face is on the left, on the right, or at the center. This last section of code uses the OpenCV APIs for that purpose. Here is the code that displays the Left, Center, and Right labels on the screen:

putText(img, "Left", cvPoint(50,240), FONT_HERSHEY_SIMPLEX, 1, cvScalar(255,0,0), 2, CV_AA);
putText(img, "Center", cvPoint(280,240), FONT_HERSHEY_SIMPLEX, 1, cvScalar(0,0,255), 2, CV_AA);
putText(img, "Right", cvPoint(480,240), FONT_HERSHEY_SIMPLEX, 1, cvScalar(255,0,0), 2, CV_AA);

Excellent! We are done with the tracker code; let's see how to build it and make it executable.

2.5.2 Understanding CMakeLists.txt

To compile the source code above, the default CMakeLists.txt file generated when the package was created has to be edited. Below is the CMakeLists.txt used to build face_tracker_node.cpp.

The first line states the minimum version of CMake required to build this package, and the next line is the package name:

cmake_minimum_required(VERSION 2.8.3)
project(face_tracker_pkg)

The following lines search for the dependent packages of face_tracker_pkg and raise an error if any of them is not found:

find_package(catkin REQUIRED COMPONENTS
  cv_bridge
  image_transport
  roscpp
  rospy
  sensor_msgs
  std_msgs
  message_generation
)

The next line lists the system-level dependencies needed to build the package:

find_package(Boost REQUIRED COMPONENTS system)

As we have already seen, we are using a custom message definition, centroid.msg, which contains two fields, int32 x and int32 y. The following lines build the message and generate the equivalent C++ headers:



add_message_files(
  FILES
  centroid.msg
)

## Generate added messages and services with any dependencies listed here
generate_messages(
  DEPENDENCIES
  std_msgs
)


The catkin_package() function is a CMake macro provided by catkin that is required to generate the pkg-config and CMake files:


catkin_package(
  CATKIN_DEPENDS roscpp rospy std_msgs message_runtime
)

include_directories(
  ${catkin_INCLUDE_DIRS}
)

Here, we create an executable called face_tracker_node and link it against the catkin and OpenCV libraries:


add_executable(face_tracker_node src/face_tracker_node.cpp)
target_link_libraries(face_tracker_node ${catkin_LIBRARIES} ${OpenCV_LIBRARIES})


2.5.3 The track.yaml file

As discussed earlier, the track.yaml file contains the ROS parameters required by face_tracker_node. Here are the contents of track.yaml:


image_input_topic: "/usb_cam/image_raw"
face_detected_image_topic: "/face_detector/raw_image"
haar_file_face: "/home/robot/ros_robotics_projects_ws/src/face_tracker_pkg/data/face.xml"
face_tracking: 1
display_original_image: 1
display_tracking_image: 1


You can change all of these parameters to suit your needs. In particular, you may need to change haar_file_face, which is the path to the Haar face cascade file. If face_tracking is set to 1, face tracking is enabled; otherwise it is disabled. You can also set the flags here that control whether the original image and the face-tracking image are displayed.

2.5.4 The launch files

A ROS launch file can perform multiple tasks from a single file. Launch files have the .launch extension. The following code shows the contents of start_usb_cam.launch, which starts the usb_cam node to publish the camera images as a ROS topic:


<launch> <node name="usb_cam" pkg="usb_cam" type="usb_cam_node" output="screen" > <param name="video_device" value="/dev/video0" /> <param name="image_width" value="640" /> <param name="image_height" value="480" /> <param name="pixel_format" value="yuyv" /> <param name="camera_frame_id" value="usb_cam" />

Face Detection and Tracking Using ROS, OpenCV and Dynamixel Servos

[ 69 ]

<param name="auto_focus" value="false" /> <param name="io_method" value="mmap"/> </node> </launch>


Within the <node>...</node> tags are camera parameters that can be changed by the user. For example, if you have multiple cameras, you can change the video_device value from /dev/video0 to /dev/video1 to get the frames from the second camera.

The next important launch file is start_tracking.launch, which launches the face tracker node. Here are the contents of this file:



<launch>
  <!-- Launching USB CAM launch files and Dynamixel controllers -->
  <include file="$(find face_tracker_pkg)/launch/start_usb_cam.launch"/>

  <!-- Starting face tracker node -->
  <rosparam file="$(find face_tracker_pkg)/config/track.yaml" command="load"/>

  <node name="face_tracker" pkg="face_tracker_pkg" type="face_tracker_node" output="screen" />
</launch>


It first starts the start_usb_cam.launch file to obtain the ROS image topics, then loads track.yaml to get the necessary ROS parameters, and finally starts face_tracker_node to begin tracking.

The final launch file is start_dynamixel_tracking.launch; this is the launch file we have to execute for tracking and Dynamixel control. We will discuss it at the end of the chapter, after covering the face_tracker_control package.

2.5.5 Running the face tracker node

Before launching start_tracking.launch from face_tracker_pkg with the following command, note that the webcam must already be connected to your PC:


$ roslaunch face_tracker_pkg start_tracking.launch

If everything works correctly, you will get the output shown in Figure 2-17: the first window is the original image and the second is the face-detection image.

Figure 2-17: Face detection image

We have not enabled the Dynamixel servo yet; this node only detects the face and publishes the centroid values on a topic named /face_centroid.

The first part of the project is now complete. What comes next? The control part, of course; next we will look at the second package, face_tracker_control.

2.5.6 The face_tracker_control package

The face_tracker_control package is the package that performs face tracking using the Dynamixel AX-12A servo.

The file structure of the face_tracker_control package is shown in Figure 2-18.

Figure 2-18: File organization in the face_tracker_control package

Let's first look at how each of these files is used.

1. The start_dynamixel launch file

The start_dynamixel launch file starts the Dynamixel controller manager, which establishes the connection between the USB-to-Dynamixel adapter and the Dynamixel servos. Here is the definition of this launch file:


<!-- This will open USB To Dynamixel controller and search for servos -->
<launch>
  <node name="dynamixel_manager" pkg="dynamixel_controllers"
        type="controller_manager.py" required="true" output="screen">
    <rosparam>
      namespace: dxl_manager
      serial_ports:
        pan_port:
          port_name: "/dev/ttyUSB0"
          baud_rate: 1000000
          min_motor_id: 1
          max_motor_id: 25
          update_rate: 20
    </rosparam>
  </node>

  <!-- This will launch the Dynamixel pan controller -->
  <include file="$(find face_tracker_control)/launch/start_pan_controller.launch"/>
</launch>



We have to specify the port_name (you can get the port number from the kernel logs using the dmesg command). The baud_rate we configured is 1 Mbps, and the motor ID is 1. The controller_manager.py node scans servo IDs from 1 to 25 and reports any servos it detects.

After detecting the servos, it starts the start_pan_controller.launch file, which attaches a ROS joint position controller to each servo.

2. The pan controller launch file

As we saw in the previous subsection, the pan controller launch file is what attaches the ROS controller to the detected servos. Here is the definition of the start_pan_controller.launch file, which starts the pan joint controller:


<launch>
  <!-- Start tilt joint controller -->
  <rosparam file="$(find face_tracker_control)/config/pan.yaml" command="load"/>
  <rosparam file="$(find face_tracker_control)/config/servo_param.yaml" command="load"/>

  <node name="tilt_controller_spawner" pkg="dynamixel_controllers"
        type="controller_spawner.py"
        args="--manager=dxl_manager --port pan_port pan_controller"
        output="screen"/>
</launch>


The controller_spawner.py node spawns a controller for each detected servo. The parameters of the controllers and servos are stored in pan.yaml and servo_param.yaml.

2.5.7 The pan controller configuration file

The pan controller configuration file contains the configuration of the controller that the controller spawner node is going to create. Here are the contents of the pan.yaml file for our controller:


pan_controller:
    controller:
        package: dynamixel_controllers
        module: joint_position_controller
        type: JointPositionController
    joint_name: pan_joint
    joint_speed: 1.17
    motor:
        id: 1
        init: 512
        min: 316
        max: 708



In this configuration file, we have to specify the servo details, such as the ID, the initial position, the minimum and maximum servo limits, the servo moving speed, and the joint name. The controller is named pan_controller, and it is a joint position controller. Because we are using only one servo, we only need to write one controller configuration, for ID 1.
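To relate these tick values to the radian commands used later: the AX-12A encodes positions as ticks of roughly 0.29 degrees over its 0-1023 range, so min: 316 and max: 708 sit about ±1 rad (roughly ±57 degrees) around the init position of 512. The helper below is only an approximate, illustrative sketch based on that nominal resolution; it is not part of the book's packages:

#include <cmath>
#include <iostream>

// Approximate AX-12A tick-to-radian conversion, treating the init tick as zero.
// Assumes the nominal resolution of 300 degrees over 1023 ticks (~0.29 deg/tick).
double tickToRadian(int tick, int init_tick = 512)
{
    const double deg_per_tick = 300.0 / 1023.0;
    return (tick - init_tick) * deg_per_tick * M_PI / 180.0;
}

int main()
{
    std::cout << "min (316): " << tickToRadian(316) << " rad" << std::endl;  // about -1.0
    std::cout << "max (708): " << tickToRadian(708) << " rad" << std::endl;  // about +1.0
    return 0;
}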

2.5.8 The servo parameters configuration file

The servo_param.yaml file contains the configuration of pan_controller, such as the limits of the controller and the step distance of each movement. It also holds screen parameters, such as the maximum resolution of the camera image and the offset from the center of the image. The offset is used to define an area around the actual center of the image:


servomaxx: 0.5   #max degree servo horizontal (x) can turn
servomin: -0.5   # Min degree servo horizontal (x) can turn
screenmaxx: 640   #max screen horizontal (x) resolution
center_offset: 50   #offset pixels from actual center to right and left
step_distancex: 0.01   #x servo rotation steps
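As a quick worked example of how these parameters interact: with screenmaxx: 640 and center_offset: 50, the controller node described below computes center_left = 640 / 2 - 50 = 270 and center_right = 640 / 2 + 50 = 370. Any face centroid whose x value falls between pixels 270 and 370 is therefore treated as centered and produces no servo motion; outside that band, the pan position is nudged by step_distancex (0.01, in the same units as the /pan_controller/command topic) on each callback, until it reaches the servomaxx/servomin limits of ±0.5.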

2.5.9 The face tracker controller node

As we have already seen, the face tracker controller node controls the Dynamixel servo according to the centroid position of the face. The code of this node is in face_tracker_control/src/face_tracker_controller.cpp. The main ROS headers included in this code are shown below; here, the Float64 header is used to hold the position value message sent to the controller:


#include "ros/ros.h" #include "std_msgs/Float64.h" #include <iostream>


The following variables hold the parameter values from servo_param.yaml:


int servomaxx, servomin, screenmaxx, center_offset, center_left, center_right;
float servo_step_distancex, current_pos_x;


The following std_msgs::Float64 messages hold the initial and current positions of the controller, respectively. The controller only accepts this message type:


std_msgs::Float64 initial_pose;
std_msgs::Float64 current_pose;


This is the publisher handle used to publish the position commands to the controller:


ros::Publisher dynamixel_control;


Moving on to the main() function of the code, the first line is the subscriber of /face_centroid, which carries the centroid value; whenever a value arrives on this topic, the face_callback() function is called:


ros::Subscriber number_subscriber = node_obj.subscribe("/face_centroid",10,face_callback);



The following line initializes the publisher handle through which the values will be published on the /pan_controller/command topic:


The following line will initialize the publisher handle in which the values are going to bepublished through the /pan_controller/command topic:

dynamixel_control = node_obj.advertise<std_msgs::Float64> ("/pan_controller/command",10);

The following code creates new limits around the actual center of the image. This helps in obtaining an approximate center region of the image:


center_left = (screenmaxx / 2) - center_offset;
center_right = (screenmaxx / 2) + center_offset;


The callback is executed whenever a centroid value is received on the /face_centroid topic, and it contains the logic for moving the Dynamixel for each centroid value.

In the first branch, the x value of the centroid is compared with center_left; if the face is on the left, the servo controller position is incremented. The current value is published only when the current position is within the servo limits; if it is, the current position is sent to the controller. The logic is the same on the right side: if the face is on the right side of the image, the controller position is decremented.

When the face reaches the center of the image, the servo pauses and does nothing, which is exactly what we want. This loop repeats, and we get continuous tracking:


void track_face(int x, int y)
{
    if (x < (center_left)){
        current_pos_x += servo_step_distancex;
        current_pose.data = current_pos_x;
        if (current_pos_x < servomaxx and current_pos_x > servomin ){
            dynamixel_control.publish(current_pose);
        }
    }
    else if(x > center_right){
        current_pos_x -= servo_step_distancex;
        current_pose.data = current_pos_x;
        if (current_pos_x < servomaxx and current_pos_x > servomin ){
            dynamixel_control.publish(current_pose);
        }
    }
    else if(x > center_left and x < center_right){
        ;
    }
}
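The body of face_callback() is not listed here. A plausible minimal sketch, assuming the message type face_tracker_pkg::centroid generated from centroid.msg, would simply forward the received centroid to track_face(); the actual face_tracker_controller.cpp may differ in detail:

// Illustrative sketch only; the actual face_tracker_controller.cpp may differ.
// The include below would normally sit with the other headers at the top of the
// file; it assumes the message generated from centroid.msg in face_tracker_pkg.
#include <face_tracker_pkg/centroid.h>

void face_callback(const face_tracker_pkg::centroid::ConstPtr &msg)
{
    // Forward the received centroid (int32 x, int32 y) to the tracking logic above.
    track_face(msg->x, msg->y);
}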



2.5.14 The final run

If you have correctly completed all the steps described so far, use the following command to launch all the nodes of this project and start tracking with the Dynamixel servo:


$ roslaunch face_tracker_pkg start_dynamixel_tracking.launch


You will get the windows shown in Figure 2-22. It is convenient to use a photo to test the tracking, since the program will then track the face continuously:

Figure 2-22: Final face tracking

As shown in Figure 2-22, if the photo is on the right side, a message in the terminal reports that the image is on the right, and the controller decrements the position value to bring the photo back to the center.

2.6 Questions

● What is the main function of the usb_cam ROS package?

● What is the dynamixel_motor ROS package used for?

● Which package interfaces ROS with OpenCV?

● What is the difference between face_tracker_pkg and face_tracker_control?


2.7 Summary

This chapter covered building a face tracker using a webcam and a Dynamixel servo, with ROS and OpenCV as the software. We first learned how to configure the webcam and the Dynamixel servo; once the configuration was done, we built two packages for tracking. The first package handles face detection, and the second is a controller that sends position commands to the Dynamixel servo to track the face. We discussed the use of every file in both packages and finished with a final run to demonstrate the complete working of the system.