Detection of Broken Traffic Signs Using Location Histogram Matching


Abstract— This paper presents an approach for recognizing the broken area of traffic signs, built on a Recognition System for Traffic Signs (RSTS). After the general stages of image detection and image categorization, location histogram matching is applied to recognize broken traffic signs. The recognition proceeds by using SIFT features to adjust the acquired image to a standard position; the image is then compared with a reference image bin by bin, and finally the location and percentage of the broken area are output.
Keywords - Road image recognition; Traffic sign; Histogram matching; SIFT features; Broken detection.

I. INTRODUCTION
Traffic signs are an important part of any roadway. They are designed to regulate the flow of vehicles, give specific information to traffic participants, or warn against unexpected road circumstances. Perception and fast interpretation of road signs are crucial for the driver's safety. Traffic signs are a safety precaution as well as an informational resource, but because of weather or human damage some signs may be broken. A broken sign is harder for the driver to recognize; it can distort the driver's judgment and even cause accidents. Keeping traffic signs well maintained and easily viewable is therefore an important matter that benefits every traffic participant.

Many papers have addressed traffic sign recognition, ranging from the color-geometric model of Zhu et al. [7] to recognition by division of character and symbol regions by Lee et al. [12]. Most of them can be divided into two stages: image detection and image categorization. We propose a further stage that detects the broken area.
This paper describes that third stage. It initially segments the traffic sign from the background, uses SIFT features to adjust an image acquired with scaling or rotation to the standard camera axis orientation, and then blurs the image to suppress noise. Next, the histograms of the acquired and reference images are calculated and compared to find the differences. Finally, the program outputs the location and percentage of the broken area. An advantage of this approach is that it can process the data in real time while the carrier car is moving at regular speed, and the recognition accuracy can be adjusted to the requirements. A remaining weakness is that the SIFT features do not achieve a satisfactory matching success rate in some situations.

A straightforward way to detect damage is to compare the traffic sign with a reference. Since the acquired image may be taken from any camera axis orientation, we use SIFT features to transform it into the reference image's position. To locate the broken area, we split the image into N bins along its x and y axes. We can then compare the corresponding bins of the acquired image and the reference: when a bin differs between the two images, the row or column with that bin number must contain part of the broken area.
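The bin-wise comparison described above can be sketched in Python. This is a minimal illustration only; the function name, the relative-difference test, and the 15% tolerance are our own assumptions, not values from the paper:

```python
import numpy as np

def locate_broken_bins(acquired_hist, reference_hist, tolerance=0.15):
    """Compare per-bin pixel counts of the acquired sign against the
    reference.  Bins whose counts differ by more than `tolerance`
    (relative to the reference count) are flagged as broken.

    Both inputs are 1-D arrays of length N: the count of sign-colored
    pixels in each bin along one image axis.
    """
    acquired = np.asarray(acquired_hist, dtype=float)
    reference = np.asarray(reference_hist, dtype=float)
    # Relative difference per bin; avoid division by zero for empty bins.
    denom = np.maximum(reference, 1.0)
    rel_diff = np.abs(acquired - reference) / denom
    broken = np.where(rel_diff > tolerance)[0]
    # Percentage of the sign area missing, summed over the flagged bins.
    missing = np.clip(reference - acquired, 0, None)[broken].sum()
    percent = 100.0 * missing / max(reference.sum(), 1.0)
    return broken, percent
```

Running this once along the x axis and once along the y axis gives the broken columns and rows, whose intersection localizes the damage.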
II. METHODOLOGY
A. SIFT Features
To obtain the broken-area information, the most direct way is to adjust the traffic sign image to the standard camera axis orientation and then compare it with the reference image. In the initial stage, the image warping transformation requires some unique keypoints to establish the mapping, so SIFT features are brought into the method. The SIFT feature is invariant to rotation and scale, and is robust to added noise, viewpoint change, and illumination change. Image registration can succeed even when a large scale change occurs. According to the SIFT features, we can then match the two images.
The SIFT algorithm consists of four major stages:
(1) scale-space peak selection
(2) keypoint localization
(3) orientation assignment
(4) keypoint descriptor

Figure 1. (a) Warping-transformed image and (b) captured sign image.

Figure 2. Two cases: (a) without blurring and (b) with blurring.

In general, however, there are still some limitations with images acquired in natural settings. For example, image matching succeeds between day images and between night images, but under a day-and-night illumination change the method may fail. Another situation is when one image is a general view while the other is a close-up showing a strong relief effect; in such cases, and under large viewpoint changes, SIFT finds fewer matches.
B. Warping Transformation
People do not always look straight at the plane of the traffic sign; the view may look like Figure 1(b), taken from an arbitrary position. Because the method requires a standard position for comparison with the reference image, we use a geometric manipulation to bring the traffic sign into that position. The warping maps pixels from one location in the image to a different location, often performing subpixel interpolation along the way. From the SIFT feature points we can compute the actual transform that relates the different views.
C. Blurring
There are many reasons for smoothing, but it is usually done to reduce noise or camera artifacts. In our case, because of manufacturing differences or some previous processing, we usually cannot obtain an exactly matching sample, so we use blurring to deal with the mismatched edges and noise. We select the Gaussian filter: it is probably not the fastest, but it is the most useful. The Gaussian has nice properties, such as having no sharp edges, and thus does not introduce ringing into the filtered image. A Gaussian blur is typically generated by convolving the image with a kernel of Gaussian values. In this paper, the kernel size is 7×7 and its Gaussian parameter (σ) is 2. In Figure 2 we can see that the difference image without blurring has mismatched edges (marked by a circle), while blurring eliminates that part of the difference.
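The 7×7 Gaussian kernel with σ = 2 can be built and applied directly in NumPy. This is a sketch of the standard construction; in practice a library convolution routine would be used:

```python
import numpy as np

def gaussian_kernel(size=7, sigma=2.0):
    """Build the normalized size x size Gaussian kernel (the paper
    uses a 7x7 kernel with sigma = 2)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def blur(image, size=7, sigma=2.0):
    """Convolve a 2-D image with the Gaussian kernel (zero padding)."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(image.astype(float), pad)
    out = np.zeros_like(image, dtype=float)
    # Accumulate each kernel tap as a shifted, weighted copy of the image.
    for dy in range(size):
        for dx in range(size):
            out += k[dy, dx] * padded[dy:dy + image.shape[0],
                                      dx:dx + image.shape[1]]
    return out
```

Because the kernel sums to one, flat regions keep their intensity while edge mismatches between the two images are smoothed away.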
D. Calculate 2D Histogram
A histogram is a graphical representation giving a visual impression of the distribution of data. The original image histogram reflects the gray-level distribution of the pixels: the horizontal axis is the gray level, and the vertical axis is the number of pixels. In this paper, however, we want the histogram to reflect the locations of the pixels, so the original image histogram is modified so that its horizontal axis is pixel location.
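A location histogram of this kind can be sketched as follows: instead of binning by gray level, pixels inside the sign's color range are counted per bin along each image axis. The function name and the bin mapping are our own illustration:

```python
import numpy as np

def location_histograms(mask, n_bins):
    """Count the sign-colored pixels falling into each of `n_bins`
    bins along the x and y axes of the image.

    `mask` is a boolean array marking pixels inside the sign's color
    range; the returned pair (hist_x, hist_y) gives per-bin counts
    whose horizontal axis is image location rather than gray level.
    """
    h, w = mask.shape
    ys, xs = np.nonzero(mask)
    hist_x = np.bincount(xs * n_bins // w, minlength=n_bins)
    hist_y = np.bincount(ys * n_bins // h, minlength=n_bins)
    return hist_x, hist_y
```

The two returned histograms are exactly the per-bin counts that the comparison stage checks against the reference sign.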
a) Divide Color range
Before building the histogram, we need to divide the color range, to tell the program which pixels should be counted. The color range division is done in the RGB model. Due to varying lighting and weather conditions, segmenting traffic signs by color information, especially in outdoor images, is a significantly challenging task. The paper therefore attempts to collect samples from every condition, with the ultimate aim of making the color range complete. The conditions include daytime, poor light, strong light, faded paint, fog, and reflections from cars. We divide the color range according to these extreme samples, obtaining the five general traffic sign colors in the RGB model.
We then divide the color range as shown in Table I. In the RGB model we limit the red, green, and blue values according to the situations above, and at the same time limit the absolute differences between the channels. In this way we can express the color range.
TABLE I. THE AVAILABLE RANGE OF EACH COLOR.
Color    Range
Red      97
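A range test of this form can be sketched as below. The thresholds are illustrative placeholders only: the 97 follows the red entry visible in Table I, while the absolute-difference threshold and the treatment of the other colors are our own assumptions:

```python
import numpy as np

def red_mask(rgb, r_min=97, diff_min=30):
    """Mark pixels whose color falls in the 'red' range.

    Each channel's value and the absolute differences between channels
    are limited, following the scheme of Table I.  `r_min` echoes the
    table's red entry; `diff_min` is a hypothetical placeholder.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    # Red must be bright enough and clearly dominate green and blue.
    return (r >= r_min) & (r - g >= diff_min) & (r - b >= diff_min)
```

Analogous masks for the other four sign colors, combined with the location histograms above, supply the per-bin counts used for broken-area detection.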