
Finding Text in Natural Scenes by Figure-Ground Segmentation

Authors:
Huiying Shen (Smith-Kettlewell Eye Research Institute, San Francisco, CA); J. Coughlan

Much past research on finding text in natural scenes uses bottom-up grouping processes as a first processing step to detect candidate text features. While such grouping procedures are a fast and efficient way of extracting the parts of an image most likely to contain text, they still produce large numbers of false positives that must be pruned before the text can be read by OCR. We argue that a natural framework for pruning these false positive text features is figure-ground segmentation. The process is implemented using a graphical model (i.e., an MRF) in which each candidate text feature is represented by a node. Since each node has only two possible states (figure and ground), and since the connectivity of the graphical model is sparse, we can perform rapid inference on the graph using belief propagation. We show promising results on a variety of urban and indoor scene images containing signs, demonstrating the feasibility of the approach.
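The inference step described in the abstract (a binary MRF over candidate text features, solved with loopy belief propagation) can be sketched in a few lines. The Python sketch below is illustrative only, not the authors' implementation: the unary potentials, the shared pairwise compatibility matrix, and the toy graph are assumptions made for the example.

    import numpy as np

    def loopy_bp_binary_mrf(unary, edges, pairwise, n_iters=20):
        # Sum-product loopy belief propagation on a sparse binary MRF.
        # unary    : (N, 2) per-node potentials over the states {ground, figure}.
        # edges    : list of (i, j) pairs giving the sparse graph connectivity.
        # pairwise : (2, 2) compatibility matrix shared by every edge (assumed).
        # Returns  : (N, 2) approximate marginal beliefs for each node.
        n_nodes = unary.shape[0]
        neighbors = {i: [] for i in range(n_nodes)}
        for a, b in edges:
            neighbors[a].append(b)
            neighbors[b].append(a)
        # One message per directed edge, initialised to uniform.
        msgs = {(i, j): np.ones(2) for a, b in edges for i, j in ((a, b), (b, a))}

        for _ in range(n_iters):
            new_msgs = {}
            for i, j in msgs:
                # Product of node i's potential and all incoming messages except j's.
                prod = unary[i].copy()
                for k in neighbors[i]:
                    if k != j:
                        prod = prod * msgs[(k, i)]
                # Marginalise node i's state through the pairwise potential.
                m = pairwise.T @ prod
                new_msgs[(i, j)] = m / m.sum()  # normalise for numerical stability
            msgs = new_msgs

        # Belief at each node: unary potential times all incoming messages.
        beliefs = unary.copy()
        for i in range(n_nodes):
            for k in neighbors[i]:
                beliefs[i] = beliefs[i] * msgs[(k, i)]
        return beliefs / beliefs.sum(axis=1, keepdims=True)

    # Toy example: four candidate text features on a chain, with a Potts-style
    # pairwise term that encourages neighbouring features to share a label.
    unary = np.array([[0.2, 0.8], [0.3, 0.7], [0.9, 0.1], [0.6, 0.4]])
    edges = [(0, 1), (1, 2), (2, 3)]
    pairwise = np.array([[0.8, 0.2], [0.2, 0.8]])
    print(loopy_bp_binary_mrf(unary, edges, pairwise))

Because every node is binary and the graph is sparse, each iteration touches only a handful of two-element messages per edge, which is what makes the pruning step fast in practice.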

Published in:

18th International Conference on Pattern Recognition (ICPR 2006), Volume 4

Date of Conference:

2006