How to measure the quality of laboratory testing has long been a challenging problem for laboratory managers and accrediting agencies. Traditionally, laboratory quality has been assessed by direct inspection, proficiency testing, and the credentials of staff. None of these methods fully answers a fundamental question: does the laboratory give technically accurate and clinically meaningful information for each patient it tests? This paper discusses how information on patient outcomes can be used to screen for laboratories that may be making frequent random or systematic errors. This approach is called downstream event monitoring (DEM). The basic idea is to look at what happens to a laboratory's patients in a critical window of time after they have been tested. The approach applies a basic adage of quality management: follow up with your customers to see whether your product has met their needs. The main idea of DEM is that if a laboratory has not conveyed accurate information, the clinician may take actions that fail to help, or even harm, the patient. If a laboratory's patients have an unusually high rate of adverse events within a window of time in which the laboratory test would have played a critical role, the laboratory should be examined further to determine whether it is the cause of the problem. DEM remains a technique under development: it requires a clinical logic relating a patient's outcomes back to a laboratory test, and it requires sound data for comparing laboratories. This paper discusses how the prothrombin time test and the serum digoxin test have been examined for Medicare patients to see whether certain laboratory characteristics are associated with unusually high rates of adverse events after testing. The need for future validation studies is also discussed.
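The screening logic described above, counting adverse events that fall inside a critical post-test window and comparing each laboratory's rate against the pooled rate, can be sketched in code. This is a minimal illustration only: the record format, the 30-day window, and the z-score cutoff are assumptions for the example, not values drawn from the paper, and a flagged laboratory warrants further examination rather than a verdict.

```python
from collections import defaultdict
from datetime import date, timedelta
from math import sqrt

# Hypothetical record format: (lab_id, test_date, adverse_event_date or None).
# The 30-day window is an illustrative choice for the "critical window".
WINDOW = timedelta(days=30)

def flag_labs(records, z_cutoff=2.0):
    """Flag labs whose in-window adverse-event rate is unusually high."""
    tests = defaultdict(int)   # tests performed, per lab
    events = defaultdict(int)  # adverse events inside the window, per lab
    for lab, tested, event in records:
        tests[lab] += 1
        # An event counts only if it occurs within the critical window
        # after testing, mirroring the DEM idea.
        if event is not None and tested <= event <= tested + WINDOW:
            events[lab] += 1

    total_tests = sum(tests.values())
    pooled = sum(events.values()) / total_tests  # overall event rate
    flagged = []
    for lab, n in tests.items():
        rate = events[lab] / n
        # One-sided z-score of the lab's rate against the pooled rate;
        # this is a screen, not a diagnosis of laboratory error.
        se = sqrt(pooled * (1 - pooled) / n)
        if se > 0 and (rate - pooled) / se > z_cutoff:
            flagged.append(lab)
    return flagged
```

For example, a laboratory with 20 in-window adverse events per 100 tests would be flagged when the pooled rate across all laboratories is near 10 percent, while a laboratory at 2 per 100 would not. In a real DEM study the event definitions would come from the clinical logic (e.g., bleeding events after a prothrombin time test), and the comparison would adjust for case mix.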