
P_{e|i} = P[Y \notin \Gamma_i \mid H_i] = 1 - P[Y \in \Gamma_i \mid H_i]    (3.14)

where we have used the equivalent specification of the decision rule in terms of the decision regions it defines. We denote by P_{c|i} = P[Y \in \Gamma_i \mid H_i] the conditional probability of a correct decision, given H_i. If the prior probabilities \pi_i are known, then we can define the (average) error probability as

P_e = \sum_i \pi_i P_{e|i}    (3.15)

Similarly, the average probability of a correct decision is given by

P_c = \sum_i \pi_i P_{c|i} = 1 - P_e    (3.16)

Example 3.2.3 The conditional error probabilities for the sensible decision rule (3.13) for the basic Gaussian example (Example 3.2.2) are

P_{e|0} = P[Y > m/2 \mid H_0] = Q\left(\frac{m}{2v}\right)

since Y \sim N(0, v^2) under H_0, and

P_{e|1} = P[Y \le m/2 \mid H_1] = \Phi\left(\frac{m/2 - m}{v}\right) = Q\left(\frac{m}{2v}\right)

since Y \sim N(m, v^2) under H_1. Furthermore, since P_{e|1} = P_{e|0}, the average error probability is also given by

P_e = Q\left(\frac{m}{2v}\right)

regardless of the prior probabilities.
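To make the closed-form expressions above concrete, here is a minimal numerical sketch (not from the text) that estimates the conditional error probabilities of Example 3.2.3 by simulation and compares them with Q(m/(2v)); the values of m, v, and the sample size are arbitrary illustrative choices.

```python
# Sketch: Monte Carlo check of P_e = Q(m/(2v)) for the basic Gaussian example,
# assuming H0: Y ~ N(0, v^2), H1: Y ~ N(m, v^2), and the rule "decide H1 iff y > m/2".
import numpy as np
from math import erfc, sqrt

def qfunc(x):
    # Q(x) = P[N(0,1) > x]
    return 0.5 * erfc(x / sqrt(2.0))

m, v, n = 2.0, 1.0, 10**6
rng = np.random.default_rng(0)

y0 = rng.normal(0.0, v, n)      # observations under H0
y1 = rng.normal(m, v, n)        # observations under H1
pe0 = np.mean(y0 > m / 2)       # estimate of P[decide H1 | H0]
pe1 = np.mean(y1 <= m / 2)      # estimate of P[decide H0 | H1]

print(pe0, pe1, qfunc(m / (2 * v)))   # all three should be close to Q(m/(2v))
```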

Notation Let us denote by arg max the argument of the maximum. That is, for a function f(x) with maximum occurring at x_0, we have

\max_x f(x) = f(x_0), \qquad \arg\max_x f(x) = x_0

Maximum likelihood decision rule The maximum likelihood (ML) decision rule is defined as

\delta_{ML}(y) = \arg\max_{1 \le i \le M} p(y|i) = \arg\max_{1 \le i \le M} \log p(y|i)    (3.17)

The ML rule chooses the hypothesis for which the conditional density of the observation is maximized. In rather general settings, it can be proven to be asymptotically optimal as the quality of the observation improves (e.g., as the number of samples gets large, or the signal-to-noise ratio gets large). It can be checked that the sensible rule in Example 3.2.2 is the ML rule for the basic Gaussian example.
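As an illustration of (3.17), the following sketch (not from the text; the Gaussian conditional densities and the numbers are assumptions) implements the ML rule as an argmax over conditional log-densities. For two hypotheses with means 0 and m it reproduces the midpoint-threshold rule of Example 3.2.2.

```python
# Sketch: the ML rule (3.17) for M hypotheses H_i: Y ~ N(m_i, v^2).
import numpy as np

def gaussian_log_pdf(y, mean, v):
    # log of the N(mean, v^2) density
    return -0.5 * np.log(2 * np.pi * v**2) - (y - mean)**2 / (2 * v**2)

def ml_decision(y, means, v):
    # delta_ML(y) = arg max_i log p(y|i)
    return int(np.argmax([gaussian_log_pdf(y, mi, v) for mi in means]))

m, v = 2.0, 1.0
print(ml_decision(0.3, [0.0, m], v))  # 0: 0.3 lies below the midpoint m/2
print(ml_decision(1.7, [0.0, m], v))  # 1: 1.7 lies above the midpoint m/2
```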

Another popular decision rule is the minimum probability of error (MPE) rule, which seeks to minimize the average probability of error. It is assumed that the prior probabilities \pi_i are known.

We now derive the form of the MPE decision rule.

Derivation of MPE rule Consider the equivalent problem of maximizing the probability of a correct decision. For a decision rule corresponding to decision regions \Gamma_i, the conditional probabilities of making a correct decision are given by

P_{c|i} = \int_{\Gamma_i} p(y|i) \, dy

and the average probability of a correct decision is given by

P_c = \sum_i \pi_i P_{c|i} = \sum_i \int_{\Gamma_i} \pi_i p(y|i) \, dy

Now, pick a point y. If we see Y = y and decide H_i (i.e., y \in \Gamma_i), the contribution to the integrand in the expression for P_c is \pi_i p(y|i). Thus, to maximize the contribution to P_c for that potential observation value y, we should put y \in \Gamma_i such that \pi_i p(y|i) is the largest. Doing this for each possible y leads to the MPE decision rule.

We summarize and state this as a theorem below.

Theorem 3.2.1 (MPE decision rule) For M-ary hypothesis testing, the MPE rule is given by

\delta_{MPE}(y) = \arg\max_{1 \le i \le M} \pi_i p(y|i) = \arg\max_{1 \le i \le M} \left( \log \pi_i + \log p(y|i) \right)    (3.18)

A number of important observations related to the characterization of the MPE rule are now stated below.
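A minimal sketch of (3.18), extending the ML example above by weighting each hypothesis with its prior; the priors and the Gaussian densities are illustrative assumptions, not from the text.

```python
# Sketch: the MPE rule (3.18), arg max_i [log pi_i + log p(y|i)],
# for Gaussian hypotheses H_i: Y ~ N(m_i, v^2) with known priors pi_i.
import numpy as np

def gaussian_log_pdf(y, mean, v):
    return -0.5 * np.log(2 * np.pi * v**2) - (y - mean)**2 / (2 * v**2)

def mpe_decision(y, means, v, priors):
    scores = [np.log(pi) + gaussian_log_pdf(y, mi, v)
              for pi, mi in zip(priors, means)]
    return int(np.argmax(scores))

# With unequal priors the decision boundary shifts away from the midpoint:
m, v = 2.0, 1.0
print(mpe_decision(1.1, [0.0, m], v, [0.5, 0.5]))  # 1 under equal priors
print(mpe_decision(1.1, [0.0, m], v, [0.9, 0.1]))  # 0 when H0 is far more likely
```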

Remark 3.2.1 (MPE rule maximizes posterior probabilities) By Bayes' rule, the conditional probability of hypothesis H_i, given the observation Y = y, is given by

P(H_i \mid y) = \frac{\pi_i p(y|i)}{p(y)}

where p(y) is the unconditional density of Y, given by p(y) = \sum_j \pi_j p(y|j). The MPE rule (3.18) is therefore equivalent to the maximum a posteriori probability (MAP) rule, as follows:

\delta_{MAP}(y) = \arg\max_{1 \le i \le M} P(H_i \mid y)    (3.19)

This has a nice intuitive interpretation: the error probability is minimized by choosing the hypothesis that is most likely, given the observation.
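The equivalence in Remark 3.2.1 can be checked numerically: dividing each \pi_i p(y|i) by p(y) rescales all scores by the same positive constant, so the argmax is unchanged. A small sketch under the same illustrative Gaussian assumptions as above:

```python
# Sketch: posteriors P(H_i|y) = pi_i p(y|i) / p(y) give the same argmax
# as the unnormalized scores pi_i p(y|i). Gaussian densities assumed.
import numpy as np

def gaussian_pdf(y, mean, v):
    return np.exp(-(y - mean)**2 / (2 * v**2)) / np.sqrt(2 * np.pi * v**2)

m, v, y = 2.0, 1.0, 0.8
priors = np.array([0.7, 0.3])
liks = np.array([gaussian_pdf(y, 0.0, v), gaussian_pdf(y, m, v)])

scores = priors * liks               # pi_i p(y|i)
posteriors = scores / scores.sum()   # P(H_i|y), since p(y) = sum_j pi_j p(y|j)

print(posteriors, posteriors.sum())                 # posteriors sum to one
print(np.argmax(scores) == np.argmax(posteriors))   # True: same decision
```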

Remark 3.2.2 (ML rule is MPE for equal priors) By setting \pi_i \equiv 1/M in the MPE rule (3.18), we see that it specializes to the ML rule (3.17). For example, the rule in Example 3.2.2 minimizes the error probability in the basic Gaussian example, if 0 and 1 are equally likely to be sent. While the ML rule minimizes the error probability for equal priors, it may also be used as a matter of convenience when the hypotheses are not equally likely.

We now introduce the notion of a likelihood ratio, a fundamental notion in hypothesis testing.

Likelihood ratio test for binary hypothesis testing For binary hypothesis testing, the MPE rule specializes to

\delta_{MPE}(y) = \begin{cases} 1, & \pi_1 p(y|1) > \pi_0 p(y|0) \\ 0, & \pi_1 p(y|1) < \pi_0 p(y|0) \\ \text{don't care}, & \pi_1 p(y|1) = \pi_0 p(y|0) \end{cases}    (3.20)

which can be rewritten as

L(y) = \frac{p(y|1)}{p(y|0)} \underset{H_0}{\overset{H_1}{\gtrless}} \frac{\pi_0}{\pi_1}    (3.21)

where L(y) is called the likelihood ratio (LR). A test that compares the likelihood ratio with a threshold is called a likelihood ratio test (LRT). We have just shown that the MPE rule is an LRT with threshold \pi_0/\pi_1.

Similarly, the ML rule is an LRT with threshold one. Often, it is convenient (and equivalent) to employ the log likelihood ratio test (LLRT), which consists of comparing \log L(y) with a threshold.
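A sketch of the binary LRT just described, written in log form; the Gaussian conditional densities and the prior values are illustrative assumptions, not part of the text.

```python
# Sketch: binary likelihood ratio test. Decide H1 iff
#   log L(y) = log p(y|1) - log p(y|0) > log(pi_0/pi_1)   (MPE threshold),
# which reduces to the ML rule (threshold one) when pi_0 = pi_1.
import numpy as np

def gaussian_log_pdf(y, mean, v):
    return -0.5 * np.log(2 * np.pi * v**2) - (y - mean)**2 / (2 * v**2)

def lrt(y, m, v, pi0=0.5, pi1=0.5):
    log_lr = gaussian_log_pdf(y, m, v) - gaussian_log_pdf(y, 0.0, v)
    return 1 if log_lr > np.log(pi0 / pi1) else 0

m, v = 2.0, 1.0
print(lrt(1.2, m, v))                    # equal priors (ML): decide 1, since y > m/2
print(lrt(1.2, m, v, pi0=0.8, pi1=0.2))  # unequal priors: threshold rises, decide 0
```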

Example 3.2.4 (Likelihood ratio for the basic Gaussian example) Substituting (3.12) into (3.21), we obtain the likelihood ratio for the basic Gaussian example as

L(y) = \exp\left( \frac{1}{v^2} \left( m y - \frac{m^2}{2} \right) \right)    (3.22)
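As a quick consistency check (illustrative, not from the text), the closed form (3.22) can be compared numerically with the direct ratio of the two Gaussian densities; note also that L(y) > 1 exactly when y > m/2, recovering the midpoint rule of Example 3.2.2.

```python
# Sketch: check (3.22) against the density ratio and against the midpoint rule.
import numpy as np

def gaussian_pdf(y, mean, v):
    return np.exp(-(y - mean)**2 / (2 * v**2)) / np.sqrt(2 * np.pi * v**2)

m, v = 2.0, 1.0
for y in (0.3, 1.0, 1.7):
    lr_closed = np.exp((m * y - m**2 / 2) / v**2)           # equation (3.22)
    lr_direct = gaussian_pdf(y, m, v) / gaussian_pdf(y, 0.0, v)
    print(y, np.isclose(lr_closed, lr_direct), lr_closed > 1, y > m / 2)
```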
