Framework for Information-Related Alignment
Table of Contents
- Introduction
- Digital Image and Information
- Entropy of an Image
- Entropy and Histogram
- Histogram Equalization
- Histogram Manipulations

Introduction

After the problem was discussed in Chapter 2 and the supporting research was reviewed in the previous chapters, it is prudent to now discuss methods for solving the denoising problem. To this end, a new approach based on information-related alignment is developed in this chapter. The methodology uses banks of pre- and post-filters to arrive at the solution. An entropy minimization and maximization strategy is used to validate the results of the medical ultrasound image analysis.

Digital Image and Information

The most important object in this approach is the image. From the classical point of view, an image is defined as something that can be perceived by the visual system [1, R. V. K. Reddy et al., 2016]. However, image processing concerns several classes of images, and not all of them are perceived in the same way or directly by the human eye. A grayscale digital image can be modeled on the discrete domain Ωd = [1, …, m] × [1, …, n] with the discrete range [0, …, 255], and is usually represented by a two-dimensional m × n array. Information is defined as the knowledge that the user acquires from the image regarding the facts or details of the subject of interest; it is the part of knowledge obtained from an investigation or study. The research paper "A Mathematical Theory of Communication", written by Claude Shannon in 1948, is widely regarded as the birth of a new field called information theory. Shannon used probability theory to model and describe sources of information: the data produced by an information source is treated as a random variable.
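The grayscale image model above can be illustrated with a minimal sketch, assuming NumPy (the m × n shape and the [0, 255] range come from the text; the concrete sizes are arbitrary):

```python
import numpy as np

# A grayscale digital image modeled as a discrete m x n array
# over the range [0, 255], i.e. an 8-bit unsigned integer array.
m, n = 4, 5
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(m, n), dtype=np.uint8)

print(image.shape)  # (4, 5)
print(image.dtype)  # uint8
```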
However, the measure of information depends on the number of possible outcomes. If two messages of lengths n1 and n2 are composed from alphabets of s1 and s2 symbols respectively, then the measure of information is given by:

H = n log s = log s^n … Eq. (3.1.a)

The larger the number of possible messages, the greater the amount of information. If only one message is possible for an event, then that event carries little or no information, since

log 1 = 0 … Eq. (3.1.b)

Claude Shannon also defined the entropy of a system: if there are m events e1, e2, …, em with probabilities of occurrence p1, p2, …, pm, then the entropy is calculated by:

H = ∑ pi log(1/pi) = -∑ pi log pi … Eq. (3.1.c)

where the information of each event is weighted by the probability of its occurrence.

Entropy of an Image

The Shannon entropy of an image is calculated from the distribution of its gray-level values, represented by the gray-level histogram; the probability of each gray value is the number of times it appears in the image divided by the total number of pixels. Researchers found that if an image consists of a single intensity, it has a low entropy value and therefore contains little information; if more intensities are present, the image has a higher entropy value and carries a larger amount of information [10, J. P. W. Pluim, 2003]. Shannon defined entropy for any n-state system such that the information obtained from an event is inversely proportional to the probability of occurrence of the event. According to [11, N. R. Pal, 1991], for an image I with gray values I(x, y) at position (x, y), of size P × Q, and with gray levels drawn from the set {0, 1, …, L-1}, the gray-level frequencies Nj satisfy:

∑_{j=0}^{L-1} Nj = PQ … Eq. (3.3.a)

and, if p(xi) is the probability of a sequence xi of gray levels of length l, the entropy is given by:

H = (1/l) ∑ p(xi) e^(1 - p(xi)) … Eq. (3.3.b)
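Eq. (3.1.c) can be checked numerically with a short sketch using only the standard library (the function name is my own; log base 2 is chosen so that entropy is measured in bits):

```python
import math

def shannon_entropy(probs):
    """Eq. (3.1.c): H = -sum(p_i * log p_i), here with log base 2 (bits).
    Terms with p_i = 0 are skipped, since p log p -> 0 as p -> 0."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# A single certain event carries no information: log 1 = 0 (Eq. 3.1.b).
print(shannon_entropy([1.0]))       # 0.0

# Four equally likely events: H = log2(4) = 2 bits.
print(shannon_entropy([0.25] * 4))  # 2.0
```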
Such an entropy is called global entropy. Thus, the information present in any image is analyzed in terms of entropy, which gives a measure of uncertainty.

Entropy and Histogram

The histogram of an image is a graph that plots the intensity values on the x-axis against the number of pixels at each intensity. For an 8-bit grayscale image there are 256 possible intensities, and all 256 are displayed on the histogram. For colored images the histogram is three-dimensional, with separate axes for the R, G, and B channels. The entropy of the image is therefore calculated over the 256 quantization levels as:

H(X) = -∑_{i=0}^{255} pi log pi … Eq. (3.4.a)

pi = Ng / Np … Eq. (3.4.b)

where Ng is the number of pixels at a given gray level, Np is the total number of pixels in the image, and pi is the probability of occurrence of each gray-level intensity. Since the information present in the image can be analyzed in terms of entropy, the entropy of an image decreases as the amount of information contained in the image decreases.

Histogram Equalization

Histogram equalization redistributes the statistical distribution of gray levels present in the image. It is a form of contrast enhancement used to increase the overall contrast of an image: it adjusts pixel values for a better distribution and thereby stretches the histogram of the given image.

Histogram Manipulations

Histogram equalization is one of the most popular conventional methods for image enhancement. The method redistributes the gray levels of an image's histogram, which can significantly change image brightness. This leads to the limitations of conventional methods, such as loss of image originality, loss of fine details, and over-enhancement.
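The histogram-based entropy and the basic equalization described above can be sketched as follows, assuming NumPy (the function names are my own; the mapping through the normalized cumulative histogram is the standard equalization recipe):

```python
import numpy as np

def image_entropy(img):
    """Entropy of an 8-bit grayscale image from its 256-bin histogram:
    H = -sum(p_i * log2(p_i)), with p_i = (pixels at level i) / (total pixels)."""
    hist = np.bincount(img.ravel(), minlength=256)
    p = hist[hist > 0] / img.size
    return float(np.sum(-p * np.log2(p)))

def equalize(img):
    """Classical histogram equalization: map each gray level through the
    normalized cumulative histogram (CDF), stretching the histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]

# A single-intensity image carries no information: its entropy is zero.
flat = np.zeros((8, 8), dtype=np.uint8)
print(image_entropy(flat))

# A low-contrast image (levels 100..139) is stretched toward [0, 255].
rng = np.random.default_rng(1)
img = rng.integers(100, 140, size=(64, 64), dtype=np.uint8)
eq = equalize(img)
print(img.min(), img.max())  # narrow input range
print(eq.min(), eq.max())    # wider range after equalization
```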
Many researchers have worked on histogram equalization techniques and their manipulations. As shown in [5, E1, M. Kaur et al., 2013], there are various manipulations of histogram equalization. In brightness-preserving bi-histogram equalization (BBHE), the histogram of the input image is divided into two parts at the mean intensity XT, so that two different histograms are generated over the two ranges 0 to XT and XT+1 to XL-1; the two histograms are then equalized separately. In dualistic sub-image histogram equalization (DSIHE), the input image is divided into two sub-images of equal area, i.e., with an equal number of pixels; the brightness of the output image equals the average of the gray level that separates the two sub-images and the middle gray level. Researchers have also pointed out a disadvantage of the DSIHE method: it cannot produce any significant change in image brightness. The bi-histogram equalization method with minimum mean brightness error (MMBEBHE) follows the same approach as BBHE, but selects the separation point that yields the minimum absolute difference between the mean brightness of the input and output images.
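The BBHE splitting step can be sketched as follows, assuming NumPy (the split at the mean XT and the independent equalization of the two ranges follow the description above; the exact rescaling into each sub-range is my own simplification, not the authors' implementation):

```python
import numpy as np

def bbhe(img):
    """Sketch of brightness-preserving bi-histogram equalization (BBHE):
    split the histogram at the mean intensity X_T, then equalize the two
    sub-histograms independently over [0, X_T] and [X_T + 1, 255]."""
    xt = int(img.mean())
    lut = np.arange(256, dtype=np.float64)

    lower = img[img <= xt]
    if lower.size:
        hist = np.bincount(lower, minlength=xt + 1)[: xt + 1]
        cdf = hist.cumsum() / lower.size
        lut[: xt + 1] = cdf * xt                     # mapped into [0, X_T]

    upper = img[img > xt]
    if upper.size:
        hist = np.bincount(upper - (xt + 1), minlength=255 - xt)
        cdf = hist.cumsum() / upper.size
        lut[xt + 1 :] = (xt + 1) + cdf * (254 - xt)  # mapped into [X_T+1, 255]

    return lut[img].astype(np.uint8)

rng = np.random.default_rng(2)
img = rng.integers(100, 140, size=(64, 64), dtype=np.uint8)
out = bbhe(img)
print(img.min(), img.max(), "->", out.min(), out.max())
```

Because each half of the histogram is equalized only within its own range, the output mean stays close to the input mean, which is the brightness-preserving property the method is named for.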