\documentclass[12pt]{article}
\addtolength{\textheight}{2.5in} \addtolength{\topmargin}{-0.75in} \addtolength{\textwidth}{1.0in} \addtolength{\evensidemargin}{-0.5in} \addtolength{\oddsidemargin}{-0.5in}
\setlength{\parskip}{0.1in} \setlength{\parindent}{0.0in}
\newcommand{\given}{\, | \,}
\pagestyle{empty} \raggedbottom
\begin{document}
\begin{flushleft} Prof.~David Draper \\ Department of Statistics \\ University of California, Santa Cruz \end{flushleft}
\begin{center} \textbf{\large STAT 131: Quiz 3} \textit{[15 total points]} \end{center}
\bigskip
\begin{flushleft} Name: \underline{\hspace*{5.85in}} \end{flushleft}
Bayes's Theorem is the only known approach to learning from data that satisfies two important properties: (a) it's logically internally consistent (meaning that it cannot produce contradictory conclusions such as \{An unknown quantity $\theta$ of interest to me cannot be negative, but Bayes's Theorem says that my best estimate of $\theta$ is $-2.3$\}), and (b) it combines information external and internal to your dataset in such a way that no extraneous information is inadvertently smuggled into your answer. However, it's possible to use Bayes's Theorem in a way that defeats its ability to help you learn from the world around you. The result illustrating this that we'll explore below in this quiz was called \textit{Cromwell's Rule} by the great British Bayesian statistician Dennis Lindley (1923--2013). Let $U$ be a true-false proposition whose truth status is Unknown to you, and let $D$ be another true-false proposition (representing Data) whose truth status is known to you; an example would be $U =$ (person $P$ really is infected with COVID--19) and $D =$ (this PCR screening test \textit{says} that person $P$ is infected with COVID--19) (note that $U$ and $D$ are not the same; the PCR test could be wrong).
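As a purely illustrative numerical sketch of the screening example (the numbers below are hypothetical and are not part of the quiz), write $\bar{U}$ for the negation of $U$, and suppose that
\[ P ( U ) = 0.01 \, , \qquad P ( D \given U ) = 0.95 \, , \qquad P ( D \given \bar{U} ) = 0.02 \, . \]
Then, by the Law of Total Probability,
\[ P ( D ) = P ( U ) \, P ( D \given U ) + P ( \bar{U} ) \, P ( D \given \bar{U} ) = ( 0.01 ) ( 0.95 ) + ( 0.99 ) ( 0.02 ) = 0.0293 \, , \]
so that
\[ P ( U \given D ) = \frac{ ( 0.01 ) ( 0.95 ) }{ 0.0293 } \doteq 0.324 \, : \]
the positive test raises the probability of infection from $1\%$ to about $32\%$, with the prior and the data each contributing to the answer.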
Recall that Bayes's Theorem in this situation says, assuming that $P ( D ) > 0$, that
\begin{equation} \label{bayes-1} P ( U \given D ) = \frac{ P ( U ) \, P ( D \given U ) }{ P ( D ) } \, , \end{equation}
in which $P ( U )$ quantifies your \textit{prior} information about the truth of $U$ (in the example above, this would be the prevalence of COVID--19 among people similar to person $P$ in all relevant ways).
\begin{itemize}
\item[(a)] Show that if you assume that $P ( U ) = 0$, then you would have to conclude that $P ( U \given D ) = 0$, no matter how the data information $D$ came out \textit{[5 points]}. \vspace*{1.25in}
\item[(b)] Show that if you assume that $P ( U ) = 1$, then you would have to conclude that $P ( U \given D ) = 1$, no matter how the data information $D$ came out \textit{[5 points]}. \newpage
\item[(c)] Briefly explain in what sense your results in (a) and (b) imply that
\begin{quote} \textit{Putting prior probability 0 or 1 on anything renders Bayes's Theorem incapable of helping you learn from data.} \end{quote}
What practical conclusion should we draw about the assignment of prior probabilities? Explain briefly \textit{[5 points]}. \vspace*{0.7in}
\end{itemize}
\end{document}