In image processing, an edge is defined as a set of connected pixel positions where the intensity (gray-level or color) values change abruptly. Edges typically mark where objects separate from their surroundings; they may also indicate changes in surface material, orientation, or illumination. Put simply, edges can be seen as points where two adjacent regions of an image differ noticeably in tone or color.
Edges have many applications in imaging science. They are used in photo manipulation software to mask out portions of an image (for example, to remove unwanted objects from photographs), and in computer vision algorithms to identify specific features in images (such as faces).
There are several ways to detect edges in images. A simple method is to compare the gray level of each pixel with its neighbors. Pixels whose gray level differs sharply from that of a neighbor are likely to lie on an edge.
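This neighbor-comparison idea can be sketched in a few lines of Python. The function below is a minimal illustration, not an optimized detector; the image is assumed to be a list of rows of gray levels, and the threshold value of 50 is an arbitrary choice.

```python
# Minimal sketch of neighbor-difference edge detection on a
# grayscale image stored as a list of rows. The threshold (50)
# is an arbitrary value chosen for illustration.

def difference_edges(img, threshold=50):
    """Mark a pixel as an edge if its gray level differs from its
    right or lower neighbor by more than the threshold."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            right = abs(img[y][x] - img[y][x + 1]) if x + 1 < w else 0
            down = abs(img[y][x] - img[y + 1][x]) if y + 1 < h else 0
            if max(right, down) > threshold:
                edges[y][x] = 1
    return edges

# A 4x4 image with a dark left half and a bright right half:
img = [[10, 10, 200, 200]] * 4
print(difference_edges(img))  # edge marked at the boundary column
```

On this toy image, the only large difference is between columns 1 and 2, so each row of the output is `[0, 1, 0, 0]`.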
Another method is to apply filters to the image. For example, the Canny operator combines Gaussian smoothing, gradient computation, non-maximum suppression, and hysteresis thresholding to trace thin, connected edges. Other common filters include Gaussian filters, which smooth the image to suppress noise before gradients are computed, and morphological operators, whose gradient (dilation minus erosion) outlines the structures inside an image.
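Filter-based detection usually means convolving the image with small gradient kernels. The sketch below uses the classic 3x3 Sobel kernels in pure Python; it is an illustration of the principle rather than a production implementation, and it leaves the one-pixel border untouched.

```python
# Sketch of filter-based edge detection with the Sobel kernels.
# Pure Python; img is a list of rows of gray levels.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal gradient
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical gradient

def convolve_at(img, kernel, y, x):
    """Apply a 3x3 kernel centered on pixel (y, x)."""
    total = 0
    for ky in range(3):
        for kx in range(3):
            total += kernel[ky][kx] * img[y + ky - 1][x + kx - 1]
    return total

def sobel_magnitude(img):
    """Gradient magnitude at each interior pixel (borders left at 0)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = convolve_at(img, SOBEL_X, y, x)
            gy = convolve_at(img, SOBEL_Y, y, x)
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# Vertical step edge: the magnitude is high near the boundary.
img = [[0, 0, 255, 255]] * 4
print(sobel_magnitude(img))
```

Thresholding the resulting magnitude map then yields a binary edge image.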
Yet another approach comes from broader computer vision: rather than detecting edges directly, one matches pixels or features across frames in a video sequence, and object boundaries can then be recovered from the resulting motion field.
In a digital image, edges are large local variations in intensity: an edge is a connected group of pixels that forms a boundary between two regions. Edge profiles are commonly classified into three types. A step edge is an abrupt jump from one intensity level to another. A ramp edge is the same transition spread over several pixels, which is how steps usually appear in real images once blur and noise are involved. A roof edge rises and then falls again, as along a thin line. The sharper the profile, the easier the edge is to detect by simple thresholding of the gradient.
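Edge profiles are often modeled as step, ramp, or roof shapes. These can be illustrated with hypothetical one-dimensional gray-level sequences (all values below are made up for illustration); taking the first difference, a crude 1-D gradient, makes the distinction visible.

```python
# Hypothetical 1-D gray-level profiles for the three edge models.
step = [0, 0, 0, 255, 255, 255]      # abrupt jump in intensity
ramp = [0, 51, 102, 153, 204, 255]   # the same jump spread over pixels
roof = [0, 85, 170, 255, 170, 85]    # rise then fall, as along a thin line

def first_diff(profile):
    """Discrete first difference: a crude 1-D gradient."""
    return [b - a for a, b in zip(profile, profile[1:])]

print(first_diff(step))  # one large spike: [0, 0, 255, 0, 0]
print(first_diff(ramp))  # constant slope: [51, 51, 51, 51, 51]
print(first_diff(roof))  # sign change:    [85, 85, 85, -85, -85]
```

A step produces a single gradient spike, a ramp a plateau of moderate gradient, and a roof a gradient that changes sign at the peak.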
Edges that appear in images as boundaries between objects and their backgrounds play an important role in both art and science. Image analysts use them to locate features, while artists use them to convey emotion or tell a story. Vision scientists study how the visual system detects contours and contrast boundaries, and the perception of edges is also crucial for avoiding obstacles while walking or driving.
There are several common sources of edges: discontinuities in depth (where one object occludes another), discontinuities in surface orientation, changes in surface color or material, and changes in illumination, such as shadow boundaries. A scene photographed under flat, shadow-free lighting produces mostly occlusion and reflectance edges; adding shadows introduces additional illumination edges, which tend to be softer. Edges can also be caused by changes in surface texture, such as ripples where water flows over a rock.
Edge detection is an image processing technique used to find locations in a digital image that have discontinuities, or sudden changes in image brightness. These locations, the borders (or boundaries) in the picture, are where the brightness fluctuates drastically, and they can be found using any one of several algorithms. Open-source edge-detection libraries are available for many programming languages.
(Figure: an example image with edges detected. Image courtesy of Wikipedia user FuzzyWuzzyCat.)
Edge detection grew out of early computer vision research: Lawrence Roberts described a simple gradient-based cross operator in 1963, the Sobel operator followed in 1968, and later milestones include the Marr-Hildreth operator (1980) and the Canny detector (1986), which remains a standard reference method today.
Digital image processing systems use various techniques to find edges and regions. These include simple global thresholding, where a single cutoff is applied to the whole image so that all values above the threshold are considered white and all values below it are black; adaptive (local) thresholding, where the cutoff varies across the image according to local statistics; and region-based segmentation, where different regions (e.g., objects) in the image are assigned unique labels and separated from the background.
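Global thresholding is the simplest of these and fits in one line. The sketch below assumes an 8-bit grayscale image stored as a list of rows; the cutoff of 128 is just the midpoint of the 0-255 range, chosen for illustration.

```python
# Sketch of simple global thresholding on an 8-bit grayscale image:
# every pixel at or above the cutoff becomes white (255), everything
# below becomes black (0). The cutoff of 128 is an arbitrary midpoint.

def global_threshold(img, t=128):
    return [[255 if p >= t else 0 for p in row] for row in img]

img = [[10, 120, 130, 250]]
print(global_threshold(img))  # [[0, 0, 255, 255]]
```

In practice the cutoff is often chosen automatically, for example from the image histogram, rather than fixed in advance.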
Edges can also be found in film photographs by examining the transitions between light and dark areas on the film.