In addition, a low-pass filter is applied to the edge pixels in the Value channel before the sharpening process so that salt-and-pepper noise can be suppressed. Finally, the adjusted Value channel is combined with the Hue and Saturation channels to obtain the sharpened color image in [10].
In certain cases, image sharpening is carried out in conjunction with image contrast enhancement [11-19]. In such approaches, the statistics of the image to be sharpened are first obtained, and then histogram equalization is performed. In [11], a Laplace filter is first applied so that the strength of the discontinuities in the image to be processed can be evaluated. After that, a Laplace filter is applied again to highlight discontinuities of smaller strength, while a Gaussian filter is applied to suppress discontinuities of larger strength.
Finally, the contrast is enhanced with an adaptive histogram equalization approach to obtain better visual perceptual quality [11]. In [12], an edge-weighted contrast enhancement algorithm is proposed. The image to be enhanced is first processed by a median filter to get a low-pass filtered image. Meanwhile, the original image is also processed by a weighted threshold histogram equalization (WTHE) approach to get a rudimentary enhanced image.
Finally, the Sobel operator is applied to the original image to derive a pair of weights for the low-pass filtered image and the rudimentary enhanced image, so that the two images can be merged to obtain the final enhanced image. Though histogram equalization (HE) has proved simple and effective for contrast enhancement, it tends to shift the mean brightness of the image toward the middle of the permitted range and hence is not well suited to consumer electronic products, where preserving the original brightness is essential to avoid an unnatural look and visual artifacts [13].
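The mean-brightness shift of plain HE comes directly from mapping grey levels through the global cumulative histogram, and a minimal sketch makes the effect easy to reproduce (the test image and function names here are our own illustration, not code from [13]):

```python
import numpy as np

def histogram_equalize(img):
    """Classic histogram equalization for an 8-bit grey-scale image.

    Each grey level is mapped through the normalized cumulative histogram,
    which tends to push the mean brightness toward the middle of the range,
    the drawback noted for consumer applications.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                              # normalize to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)  # transformation function
    return lut[img]

# A dark test image: equalization raises its mean toward mid-grey
dark = np.clip(np.random.default_rng(0).normal(40, 10, (64, 64)),
               0, 255).astype(np.uint8)
print(dark.mean(), histogram_equalize(dark).mean())
```

Running this on the dark synthetic image shows the equalized mean landing near the middle of [0, 255] even though the input mean is around 40, which is exactly the brightness change the brightness-preserving methods below try to avoid.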
To overcome this problem, a brightness-preserving histogram equalization with maximum entropy (BPHEME) approach is proposed in [13]. BPHEME seeks the target histogram that maximizes the entropy under the constraint that the mean brightness is fixed, so that the image is enhanced while the original brightness is preserved [13]. According to the analysis in many studies of contrast enhancement, using the intensity distribution of the whole image is the major cause of visual artifacts in conventional histogram equalization.
Therefore, some studies propose so-called subregion or subimage histogram equalization [14-19]. Instead of conventional histogram equalization, a method called subregion histogram equalization is proposed for contrast enhancement in [14]. The image to be enhanced is first convolved with a Gaussian filter to obtain smoothed intensity values, and then the transformation function for histogram equalization is applied [14].
Owing to the convolution with the Gaussian filter, the transformation function is based not only on the intensities of the pixels themselves but also on the values of neighboring pixels [14]. In [15], a recursive subimage histogram equalization is developed that iteratively divides the histogram at median rather than mean values, so that brightness is preserved to a greater extent than in previous histogram partitioning methods.
A contrast enhancement method using dynamic range separate histogram equalization (DRSHE) is proposed in [16]. DRSHE first separates the dynamic range of the histogram into several parts and resizes the grey-scale range of each part based on its area ratio; the histogram intensities in each part are then uniformly redistributed within the resized grey-scale range so that unintended changes in brightness can be suppressed [16]. In [17], an edge-preserving contrast enhancement and multihistogram equalization method is proposed. By exploiting properties of the human visual system, the image to be enhanced is decomposed into segments, resulting in an efficient correction of nonuniform illumination.
Additionally, a quantitative measure of image enhancement is also proposed [17]. In [18], an adaptive image equalization algorithm is proposed. The histogram distribution is first modeled by a Gaussian mixture, and the intersection points of the Gaussian components are used to partition the dynamic range of the image into subintervals. The contrast-equalized image is generated by transforming the gray levels in each subinterval according to the dominant Gaussian component and the cumulative distribution function of the subinterval, with a weight proportional to the variance of the corresponding Gaussian component.
The algorithm is free of parameter setting for a given dynamic range of the enhanced image [18]. Recently, a fuzzy logic-based histogram equalization (FHE) was proposed for intensity transformation and spatial filtering in [19]. The fuzzy histogram is first computed based on fuzzy set theory, and then divided into two subhistograms at the median value of the original image. Finally, the two subhistograms are equalized independently to obtain a brightness-preserved and contrast-enhanced image.
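The median-split step common to such subhistogram methods can be sketched as below. This is only the split-and-equalize part, assuming an ordinary (non-fuzzy) histogram, so it illustrates the brightness-preserving idea of [19] rather than reproducing FHE itself:

```python
import numpy as np

def median_split_equalize(img):
    """Split the grey range at the median and equalize each half
    independently; pixels at or below the median stay at or below it,
    so the overall brightness is roughly preserved."""
    med = int(np.median(img))
    out = img.copy()
    for lo, hi in ((0, med), (med + 1, 255)):
        mask = (img >= lo) & (img <= hi)
        if hi <= lo or not mask.any():
            continue
        # Histogram and CDF restricted to this subinterval
        hist = np.bincount(img[mask] - lo, minlength=hi - lo + 1)
        cdf = np.cumsum(hist).astype(float) / mask.sum()
        # Map [lo, hi] onto itself through the sub-CDF
        lut = lo + np.round((hi - lo) * cdf).astype(np.int64)
        out[mask] = lut[img[mask] - lo]
    return out.astype(np.uint8)
```

Because each half is mapped onto its own subinterval, the pixel count on either side of the median is unchanged, which is the mechanism behind the brightness preservation claimed for these methods.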
In these histogram-equalization approaches, all pixels are adjusted, so both the intensity and the characteristics of the original image change [11-19]. To avoid this, we propose a model-based, three-pass algorithm for sharpening grey-scale images. To highlight transitions or discontinuities in intensity, pixels around edges or boundaries should be adjusted; that is, an additive magnitude should be imposed on those edge pixels.
In general, a larger additive magnitude yields a better sharpening result; however, a larger additive magnitude can also lead to intensity saturation around edge pixels. To find the maximal additive magnitude automatically for the images to be sharpened, we propose in this paper the use of a Grey prediction model, GM(1,1), so that over-sharpening and intensity saturation can be avoided.
The Grey prediction model is applied in a variety of fields for its ability to generate a predicted value of a sequence under limited information or sampled values [20, 21]. Owing to this characteristic, several studies apply the Grey prediction model to edge detection in images so that discontinuities or changes in intensity can be highlighted [22-24].
During the second pass, pixels around edges or boundaries are picked out with an edge detection mechanism; for example, the well-known Canny [25] or Sobel edge detector [1] can be used for this purpose. In this paper, the Canny operator as well as our previously proposed horizontal and vertical differentiator (HVD for short), which performs a difference operation between consecutive pixels [10], will be used for edge detection. In the third pass, the intensities of the pixels detected as lying around an edge or boundary are adjusted with an increment or decrement based on our previously proposed locally adapted strategy [10] for the purpose of image sharpening, while those of nonedge pixels are kept unaltered.
With the proposed approach, most of the original information contained in the image can be retained. To perform the adjustment, we first calculate the average intensity of a small local area around the pixel and then check whether the intensity of the edge pixel is greater than this average.
If so, an increment is added; otherwise, a decrement is applied. Finally, a scaling factor can also be used to adjust the additive magnitude in the proposed approach. As the experiments will show, the proposed approach yields a very distinct intensity transition for pixels around edges in the sharpened images, which demonstrates its usefulness.
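A minimal sketch of this locally adapted adjustment might look as follows; the window size, the clipping to [0, 255], and all function names are our assumptions rather than the exact procedure of [10]:

```python
import numpy as np

def box_mean(img, win=3):
    """Mean over a win x win neighbourhood, with edge replication."""
    p = win // 2
    padded = np.pad(img.astype(float), p, mode="edge")
    acc = np.zeros(img.shape, dtype=float)
    for dy in range(win):
        for dx in range(win):
            acc += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return acc / (win * win)

def sharpen_edges(img, edge_mask, delta, win=3):
    """Locally adapted adjustment: edge pixels brighter than their local
    mean are incremented by delta, the remaining edge pixels are
    decremented, and non-edge pixels are left untouched."""
    local_mean = box_mean(img, win)
    out = img.astype(int)                   # widen before adding delta
    brighter = edge_mask & (img > local_mean)
    out[brighter] += delta
    out[edge_mask & ~brighter] -= delta
    return np.clip(out, 0, 255).astype(np.uint8)
```

Pushing edge pixels away from the local mean on both sides of a boundary is what steepens the intensity transition; the clip illustrates why an overly large delta saturates the edge pixels, the problem the GM(1,1) pass is meant to prevent.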
The rest of the paper is organized as follows. Section 2 gives an overview of the Grey prediction algorithm.
The proposed algorithm is introduced in Section 3. Extensive experiments on the proposed method are given in Section 4. A conclusion is given in Section 5. Owing to its successful applications in a variety of fields, for example, image processing, statistics, medicine, the military, and business management, Grey theory, and especially the Grey prediction method, has attracted increasing attention recently [20, 21].
Unlike most conventional prediction mechanisms, the data samples used as inputs to a prediction do not have to be equally spaced or sampled in the Grey prediction model. Moreover, a very good prediction result can be obtained from limited data samples or information with a Grey prediction mechanism [20, 21]. By performing a so-called accumulated generating process, an irregular data sequence can be made regular with a GM(1,1) prediction model. The accumulated data sequence with this regular property is usually referred to as the Grey generated sequence.
After this process, the Grey generated sequence can be used for the modeling or prediction of future data samples. To utilize the GM(1,1) Grey prediction mechanism based on previously sampled data, we define the observation, that is, the previously sampled data sequence, x^(0), as

x^(0) = (x^(0)(1), x^(0)(2), ..., x^(0)(n)).

With the sampled data sequence x^(0), the Grey prediction modeling can be carried out through the following steps [20, 21]. The first step is to perform the so-called accumulated generating operation (AGO) on the original sequence to get the accumulated generating sequence x^(1):

x^(1)(k) = sum_{i=1}^{k} x^(0)(i),  k = 1, 2, ..., n.
The second step is to calculate the mean sequence z^(1)(k) from x^(1)(k):

z^(1)(k) = (x^(1)(k) + x^(1)(k-1)) / 2,  k = 2, 3, ..., n.

In the third step, four intermediate parameters C, D, E, and F are calculated as below [20, 21]:

C = sum_{k=2}^{n} z^(1)(k),  D = sum_{k=2}^{n} x^(0)(k),
E = sum_{k=2}^{n} z^(1)(k) x^(0)(k),  F = sum_{k=2}^{n} [z^(1)(k)]^2.

In step four, the so-called developing coefficient a and Grey input b are calculated as below [20, 21]:

a = (CD - (n-1)E) / ((n-1)F - C^2),  b = (DF - CE) / ((n-1)F - C^2).

With the above parameters, the GM(1,1) prediction can then be modeled by

x_hat^(1)(k+1) = (x^(0)(1) - b/a) e^(-ak) + b/a,  x_hat^(0)(k+1) = x_hat^(1)(k+1) - x_hat^(1)(k).

In this section, the proposed three-pass image sharpening algorithm is to be introduced.
Finally, the edge sharpening algorithm will be explained [10]. For the first pass, the commonly used Grey prediction model GM(1,1) is applied [20, 21]. The four sample data values of the original sequence x^(0) are used to predict the fifth sample x^(0)(5) in the sequence, which describes the trend of the intensity distribution of the image to be sharpened, as illustrated in Figure 1.
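Assuming the standard GM(1,1) formulation from [20, 21], the prediction of a fifth sample from four observations can be sketched as:

```python
import numpy as np

def gm11_predict(x0):
    """Fit a GM(1,1) model to the observed sequence x0 and predict the
    next sample: AGO, mean sequence, intermediate parameters C, D, E, F,
    developing coefficient a and Grey input b, then the inverse AGO."""
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    x1 = np.cumsum(x0)                      # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])           # mean sequence z1(k), k = 2..n

    C = z1.sum()
    D = x0[1:].sum()
    E = (z1 * x0[1:]).sum()
    F = (z1 ** 2).sum()

    denom = (n - 1) * F - C ** 2
    a = (C * D - (n - 1) * E) / denom       # developing coefficient
    b = (D * F - C * E) / denom             # Grey input

    def x1_hat(k):                          # accumulated prediction at k + 1
        return (x0[0] - b / a) * np.exp(-a * k) + b / a

    return x1_hat(n) - x1_hat(n - 1)        # inverse AGO gives x0_hat(n + 1)

# Four samples predict the fifth, as in the first pass of the algorithm
print(gm11_predict([10.0, 12.0, 14.5, 17.3]))
```

Note that the closed-form a and b are exactly the least-squares solution of the Grey differential equation, so no iterative fitting is needed; the model degenerates (a approaches 0) only for a perfectly constant input sequence.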
Recall that the purpose of image sharpening is to highlight discontinuities. For this, an increment or decrement should be added to the original intensity of pixels around boundaries. A larger additive value usually gives a better sharpening result; however, it can also lead to intensity saturation or over-sharpening of edge pixels.
Therefore, determining an appropriate or maximal additive value is an important step in the sharpening procedure. In the second pass, pixels around edges are picked out. The commonly used Canny and Sobel operators can be applied for this purpose. Nevertheless, in this paper we apply a very simple algorithm from our previous work [10] for discontinuity detection, which calculates the horizontal and vertical intensity differences of the pixel to be sharpened, that is, the horizontal and vertical differentiator (HVD) [10].
By using the HVD, the discontinuity around a pixel x can be easily detected by examining the intensity differences between x and its west neighbor x_W and between x and its north neighbor x_N in Figure 2. We then determine whether the pixel x is around an edge by checking if the first condition of the corresponding thresholding test holds [10].
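An HVD-style check can be sketched as below; the exact condition and threshold used in [10] are not reproduced here, so this is only an illustration of thresholding the west and north intensity differences:

```python
import numpy as np

def hvd_edge_mask(img, threshold=32):
    """Flag pixel x as lying around an edge when its absolute intensity
    difference from the west neighbour x_W or the north neighbour x_N
    exceeds a threshold (an illustrative value, not the one from [10])."""
    img = img.astype(int)                     # avoid uint8 wrap-around
    mask = np.zeros(img.shape, dtype=bool)
    dw = np.abs(img[:, 1:] - img[:, :-1])     # |x - x_W|
    dn = np.abs(img[1:, :] - img[:-1, :])     # |x - x_N|
    mask[:, 1:] |= dw > threshold
    mask[1:, :] |= dn > threshold
    return mask
```

Because only two subtractions per pixel are needed, this detector is far cheaper than Canny, at the cost of also firing on isolated noisy pixels, which is the issue addressed next.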
In this pass, however, some isolated pixels can also be detected as lying around an edge by the proposed HVD edge detector [10].