Deception research has consistently shown that accuracy rates tend to be just over fifty percent when averaged across truthful and deceptive messages and when an equal number of truths and lies are judged. Breaking accuracy rates down by truths and lies, however, leads to a radically different conclusion. Across three studies, a large and consistent veracity effect was evident. Truths were most often correctly identified as honest, but errors predominated when lies were judged. Truth accuracy was substantially greater than chance, whereas lie detection was often significantly below chance. Also consistent with the veracity effect, altering the truth-lie base rate affected accuracy: accuracy was a positive linear function of the ratio of truthful messages to total messages. The results show that this veracity effect stems from a truth-bias, and they suggest that the single best predictor of detection accuracy may be the veracity of the message being judged. The internal consistency and parallelism of overall accuracy scores are also questioned. These findings challenge widely held conclusions about human accuracy in deception detection.
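The base-rate claim above can be illustrated with simple arithmetic. The numbers below are hypothetical, chosen only to show the pattern (truth accuracy above chance, lie accuracy below chance); they are not figures from the studies. Overall accuracy is then a weighted average of the two, and so a linear function of the truth base rate:

```python
# Illustrative sketch with assumed accuracy values (not data from the studies):
# suppose judges correctly classify 70% of truths but only 40% of lies.
TRUTH_ACC = 0.70  # assumed accuracy on truthful messages (above 50% chance)
LIE_ACC = 0.40    # assumed accuracy on deceptive messages (below 50% chance)

def overall_accuracy(p_truth: float) -> float:
    """Overall accuracy as a linear function of the truth base rate p_truth."""
    return p_truth * TRUTH_ACC + (1 - p_truth) * LIE_ACC

# As the proportion of truths rises, overall accuracy rises linearly.
for p in (0.25, 0.50, 0.75):
    print(f"truth base rate {p:.2f} -> overall accuracy {overall_accuracy(p):.3f}")
```

With an equal number of truths and lies (base rate 0.50), the assumed values give 55% overall accuracy, consistent with the "just over fifty percent" pattern, even though judgments of lies alone are below chance.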