Line Labelling

Line labelling is a topic in computer vision which is discussed in CVonline [1]. The line labelling method aims to aid the surface interpretation of objects constructed from a line image, classifying each line as concave, convex or occluding according to its geometric properties. The method can be applied to a preprocessed two-dimensional image on which edge detection has already been performed. Line labelling is the first of two steps in surface detection: after the lines have been labelled, the surfaces can be recovered, and the three-dimensional objects in the scene can then be identified.

Introduction

Figure 1: Necker cube ambiguity.

Line labelling is the process of identifying the three-dimensional geometric properties of lines in a two-dimensional line image. Humans are extremely adept at inferring three-dimensional geometric structure from a sparse collection of lines: given an image of a three-dimensional environment, they can identify the edges of structures and determine each structure's orientation. This process is not infallible, and can lead to incorrect interpretations when more than one is possible. Where several interpretations exist, humans often perceive the figure as ambiguous, suggesting that the alternatives are explored exhaustively. Figure 1 depicts the Necker cube illusion, an example in which human geometric inference leads to ambiguity.

Assumptions


Below, we use the assumptions discussed by Huffman [2] to simplify and disambiguate the labelling problem. Huffman restricted the object space to a "polyhedral world" to eliminate the complexities of apparent contour edges. It is also assumed that no two edges line up coincidentally due to the viewing angle, and that the three-dimensional objects have limited complexity: at most three edges meet at any vertex. There are many discontinuities that can lead to viewpoint-dependent edges, that is to say, edges whose apparent location and geometry are distorted by the viewpoint. Lighting can cause reflectance and illumination discontinuities that distort the two-dimensional geometry and can even occlude edges, while depth and orientation discontinuities occur as a result of the viewpoint itself. In this article, it is assumed that there are no reflectance or illumination discontinuities.

Theory


The line labelling algorithm aims to classify each line in the image as either a concave, convex or occluding edge. Lines are labelled by examining the junctions at which multiple lines meet. The possible labels for a line are restricted by the types of junction it connects to at either end, and the line must carry the same label at both junctions.

Edge Labels

Figure 2: Plausibly labelled L- and T-junctions.
Figure 3: Plausibly labelled Y- and arrow-junctions.
  • Occluding edges: labelled with an arrow '>', directed so that the visible surface lies on the right-hand side of the arrow.
  • Convex edges: labelled with a '+' symbol.
  • Concave edges: labelled with a '-' symbol.
  • Limbs or apparent contours: continuous surface-normal discontinuities that occur on curved surfaces; their three-dimensional location varies with the viewing angle. These are labelled with a double arrow '>>'.
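
The four labels can be captured in a small data type. The following is a minimal sketch in Python (the class name and string values are illustrative, not from the source):

    from enum import Enum

    class EdgeLabel(Enum):
        """Edge labels used in line labelling (Huffman-style notation)."""
        OCCLUDING = ">"   # visible surface lies to the right of the arrow
        CONVEX = "+"      # edge between two visible surfaces bending outwards
        CONCAVE = "-"     # edge between two visible surfaces bending inwards
        LIMB = ">>"       # apparent contour on a curved surface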

Junction Types


There are several configurations in which, from the two-dimensional view of the image, edges can join at a vertex. As it is assumed that at most three edges meet at any one vertex, there are only four possible types of junction: L-, T-, Y- and arrow-junctions. At a junction, each edge can take one of four labels, so a three-edge junction has 4³ = 64 conceivable labelling schemes. This number can be reduced drastically, as the majority are not physically possible in the real world. For example, a junction at which three edges join implies a minimum of two visible surfaces, so all three edges cannot be labelled as occluding. The possible junction types and the number of permissible labellings are given below; a sketch of the junction classification follows the list.

  • L-junction:

This is where two edges meet at any angle. Every permissible L-junction labelling involves at least one occluding edge: one or both of the edges will be occluding. If both are occluding edges, the arrows must follow the same direction through the junction, as a surface is only possible on one side of the edges. There are a total of 6 labelling schemes for an L-junction.

  • T-junction:

A T-junction occurs where three edges meet such that two of them are collinear, spanning an angle of exactly 180°. T-junctions arise when one object in an image occludes another, or when an object occludes part of itself; see the example below for details. The two collinear edges must therefore be occluding edges pointing in the same direction, indicating a continuous surface on one side of the line. There are a total of 4 labelling schemes for a T-junction.

  • Y-junction:

If the angle spanned by the three edges (measured between the two outer edges, on the side containing the middle edge) is more than 180°, then the junction is Y-shaped. There are a total of 5 labelling schemes for a Y-junction.

  • Arrow-junction:

If the angle spanned by the three edges is less than 180°, then the junction is arrow-shaped. There are a total of 3 labelling schemes for an arrow-junction.
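
The classification above depends only on the directions of the edges at a vertex. The following is a minimal sketch in Python (the function name, angle convention and tolerance are illustrative assumptions):

    import math

    def classify_junction(edge_angles_deg):
        """Classify a junction from the directions (degrees) of its edges.

        Two edges give an L-junction. For three edges, the angle spanned by
        the outermost edges, measured through the middle edge, decides:
        exactly 180 degrees -> T, more -> Y, less -> arrow.
        """
        if len(edge_angles_deg) == 2:
            return "L"
        a = sorted(angle % 360.0 for angle in edge_angles_deg)
        # Circular gaps between consecutive edge directions.
        gaps = [(a[(i + 1) % 3] - a[i]) % 360.0 for i in range(3)]
        span = 360.0 - max(gaps)  # angle covered by the three edges
        if math.isclose(span, 180.0, abs_tol=1e-6):
            return "T"
        return "Y" if span > 180.0 else "arrow"

    print(classify_junction([0, 90, 180]))   # T
    print(classify_junction([0, 120, 240]))  # Y
    print(classify_junction([0, 30, 60]))    # arrow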

For basic parabolic curves, the same labels can be applied. The gradient of the curve at the point where it connects to the junction can be used to calculate the angle at the join, and thus to determine the geometric properties of the curve. More complex curves require additional computation in order to label the curve's transitions at its maxima, minima and points of inflection.

Algorithms

  1. For each vertex
    1. Select vertex v from the set {v1,...,vn}
    2. If only two edges meet at v, v is an L-junction
    3. Else
      1. Calculate the angles between the three edges
      2. Classify v as a Y-, T- or arrow-junction
    4. For each edge e connected at v
      1. If e has already been assigned a label, reduce the permissible label space for v
    5. If there are no permissible labels at vertex v, backtrack to the previous vertex and re-assign
    6. Else assign a junction labelling scheme l to v
    7. Label each edge e at v according to scheme l
    8. Remove labelling scheme l from the set L of possible schemes for v


The method by which each vertex is labelled can be represented as a tree search, as sketched below. The algorithm must start with an outside vertex, which can be labelled as occluding the background; occluding edges place greater restrictions on neighbouring junctions. The algorithm traverses the search tree, selecting the next vertex and assigning the lines that meet at this vertex one of the remaining permissible labelling schemes. The number of acceptable labelling schemes at each vertex is reduced as the lines connected to it are labelled. If there are no possible schemes left, the algorithm backtracks to a previous vertex and explores an alternative scheme. This continues until a solution is reached or there are no more possibilities, in which case the line image is deemed not to depict a valid physical object. For an ambiguous object, the first solution may not be the intended interpretation, so the tree search can be continued exhaustively to find all valid interpretations.
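
The following is a minimal sketch of this backtracking search in Python. The graph representation, the 'out'/'in' encoding of occluding arrows, and the toy two-entry catalogue are illustrative assumptions; a real implementation would use the full Huffman–Clowes tables of permissible junction labellings.

    # Labels as seen from a junction: '+', '-', or an occluding arrow that
    # points 'out' of (away from) or 'in' to (towards) the junction.
    # An edge is consistent when its two endpoints see mirrored labels.
    MIRROR = {"+": "+", "-": "-", "out": "in", "in": "out"}

    def label_drawing(junctions, catalogue):
        """Backtracking search for a consistent labelling.

        junctions: vertex -> (junction type, list of incident edge ids)
        catalogue: junction type -> permissible label tuples, one label per
                   incident edge, in the same order as the edge list
        Returns edge -> (vertex, label as seen from that vertex), or None.
        """
        order = list(junctions)
        assignment = {}

        def search(i):
            if i == len(order):
                return dict(assignment)
            v = order[i]
            jtype, edge_list = junctions[v]
            for scheme in catalogue[jtype]:
                added = []
                ok = True
                for edge, label in zip(edge_list, scheme):
                    if edge in assignment:
                        # The other endpoint fixed this edge already; the
                        # label seen from v must be the mirror image.
                        _, other = assignment[edge]
                        if MIRROR[other] != label:
                            ok = False
                            break
                    else:
                        assignment[edge] = (v, label)
                        added.append(edge)
                if ok:
                    result = search(i + 1)
                    if result is not None:
                        return result
                for edge in added:   # undo before trying the next scheme
                    del assignment[edge]
            return None

        return search(0)

    # Toy example: the outline of a flat square plate, four L-junctions.
    # Each junction lists its edges in (outgoing, incoming) order.
    junctions = {
        "A": ("L", ["AB", "DA"]), "B": ("L", ["BC", "AB"]),
        "C": ("L", ["CD", "BC"]), "D": ("L", ["DA", "CD"]),
    }
    # Toy catalogue: only two of the six permissible L-junction schemes.
    catalogue = {"L": [("out", "in"), ("in", "out")]}
    print(label_drawing(junctions, catalogue))
    # One consistent solution: occluding arrows circulating the outline.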

Probabilistic Approach


A more efficient method is to apply a probabilistic function that assigns a probability, or weight, to each scheme in a given set of labelling schemes, determined by the likelihood that the scheme is correct for the given vertex. The chosen probabilistic function is used to adjust the probability of each scheme with regard to the neighbouring line features. Let us take:

  • V to be the set {v1,...,vn} of n vertices to be labelled.
  • L to be the set {l1,...,lm} of m possible labels for the vertices.

Let Pi(lk) be the probability that label lk is the correct label for vertex vi. The probability axioms can then be applied:

  • 0 ≤ Pi(lk) ≤ 1
  • Pi(lk) = 0 implies that label lk is impossible and Pi(lk) = 1 implies the label is certain
  • The labels for each vertex vi are mutually exclusive and exhaustive. Thus:

∑k Pi(lk) = 1

Each labelling scheme is initialised with an equal probability. To further increase the accuracy of the algorithm, the initial probabilities can be weighted if prior knowledge of the scene is available. These probabilities are updated at each new assignment of an edge label: as edges are labelled, the probabilities of labelling schemes at neighbouring vertices are adjusted according to their likelihoods under the updated restrictions, as sketched below.
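
The following is a minimal sketch of one possible update step in Python, in the style of relaxation labelling. The compatibility matrix, its values and the normalisation are illustrative assumptions, not a method prescribed by the source:

    def relax_step(P, compat, neighbours):
        """One probabilistic relaxation step over vertex label probabilities.

        P:          P[i][k] is the current probability that label k is
                    correct for vertex i
        compat:     compat[k][k2] in [-1, 1], how compatible label k at a
                    vertex is with label k2 at a neighbour (illustrative)
        neighbours: neighbours[i] lists the vertices adjacent to vertex i
        """
        new_P = []
        for i, probs in enumerate(P):
            support = []
            for k, p in enumerate(probs):
                # How strongly the neighbours' current labellings support
                # label k at vertex i, averaged over the neighbours.
                q = sum(compat[k][k2] * P[j][k2]
                        for j in neighbours[i]
                        for k2 in range(len(P[j])))
                q /= max(len(neighbours[i]), 1)
                support.append(p * (1.0 + q))
            total = sum(support)
            # Renormalise so the probabilities at vertex i sum to 1.
            new_P.append([s / total for s in support] if total > 0 else probs)
        return new_P

    # Two adjacent vertices, two labels; vertex 0 already favours label 0.
    P = [[0.8, 0.2], [0.5, 0.5]]
    compat = [[1.0, -1.0], [-1.0, 1.0]]   # like labels support each other
    neighbours = [[1], [0]]
    print(relax_step(P, compat, neighbours))
    # Vertex 1 now favours label 0 as well, since its neighbour does.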

Application


Line labelling is an important aspect of vision and object recognition. It is used to identify an object's geometry in robotic manufacturing plants: robotic arms in many different applications require a digital means of identifying the geometry of an object in order to interact with it correctly in the real world. Line labelling is a stepping stone to many more complex vision systems, such as object recognition, path planning and scene recognition.

Example

Figure 4: Example line labelling.

This example works through the labelling of the image in figure 4. The vertices are listed in a set V, starting with the outside vertices and going clockwise around the edge, spiralling inwards. Starting from an outside line helps reduce the amount of backtracking, as L-junctions contain occluding edges, which in turn indicate on which side the surface lies. To keep this example short, we will assume the algorithm chose the correct labelling scheme first time, unless stated otherwise.

  1. Starting at the bottom-left vertex (labelled in red), an L-junction is found. This reduces the permissible labellings to one or both edges occluding, with the surface on the right-hand side.
  2. Using the edge from the first vertex, we are able to reduce the search space. As the surface is on the right-hand side (as indicated by the top red arrow found in the first iteration), the space is further reduced to either a concave or convex edge on the inside (green).
  3. The algorithm continues to label the edges, following the outside edge for the next 4 iterations (orange, blue, cyan and pink).
  4. The next vertex is determined to be a T-junction. As stated above, T-junctions occur when an object occludes another object, or itself. As such, the collinear edge must continue in the same direction as the known (pink) edge. For this step, let's assume the algorithm incorrectly labelled the third (non-collinear) edge as occluding, with the arrow pointing towards the junction.
    1. The next vertex is labelled as an L-junction with a convex edge. This, so far, is correct.
    2. The next iteration finds no possible solution for an arrow-junction with one internally occluded edge and one concave edge, and so must backtrack to step (4). At this point, the edge incorrectly labelled in step (4) is correctly classified and the algorithm continues.
  5. Once the outside vertices are labelled, the inner vertices have a much reduced search space, as neighbouring edges have already been constrained.
  6. The algorithm terminates when the last vertex has been visited and given a labelling scheme.

References

  1. ^ R. B. Fisher, "CVonline: an overview", Int. Assoc. of Pat. Recog. Newsletter, 27(2), April 2005.
  2. ^ D. A. Huffman, "Impossible objects as nonsense sentences", Machine Intelligence, 6:295-323, 1971.
  1. M. B. Clowes. On seeing things. Artificial Intelligence, 2:79-116, 1971.
  2. Bruno Ernst. The Eye Beguiled: Optical Illusions. Benedikt Taschen Verlag GmbH, 1992.
  3. D. A. Huffman. Impossible objects as nonsense sentences. Machine Intelligence, 6:295-323, 1971.
  4. J. Malik. Interpreting line drawings of curved objects. International Journal of Computer Vision, 1:73-103, 1987.
  5. Vishvjit S. Nalwa. A Guided Tour of Computer Vision. Addison-Wesley, 1993.
  6. Pietro Parodi. The complexity of understanding line drawings of origami scenes. International Journal of Computer Vision, 18(2):139-170, 1996.
  7. K. Sugihara. Mathematical structures of line drawings of polyhedrons. IEEE Transactions on Pattern Analysis and Machine Intelligence, 4(5):458-469, 1982.
  8. D. Waltz. Understanding line drawings of scenes with shadows. In P. H. Winston (ed.), The Psychology of Computer Vision, McGraw-Hill, 1975.
  9. T. Regier. Line Labeling and Junction Labeling: A Coupled System for Image Interpretation. International Computer Science Institute, Berkeley.



Category:Image processing Category:Artificial intelligence