Purpose. To develop and evaluate a model for assessing information retrieval and application skills, and to compare performance on the assessment exercises between students who were and were not instructed in these skills.

Method. The authors developed a set of four examination stations, each with multiple subtasks, and administered the exams to students at two medical schools. Students at one school had intensive instruction in literature searching and in filtering information for quality (instructed group); those at the other school had minimal instruction in these areas (uninstructed group). The stations addressed pediatrics content and the skills of searching Medline and the World Wide Web, evaluating research articles, evaluating the accuracy of information from the Web, and using the information to make recommendations to patients. The authors determined the psychometric characteristics of the stations and compared the performance of the two groups of students.

Results. Students in the instructed group performed significantly better, and with less variability, than the uninstructed group on four tasks, and no differently on seven tasks. There was no task on which the uninstructed group performed significantly better than the instructed group.

Conclusion. The prototype stations showed predictable differences across curricula, indicating that they have promise as tools for assessing the essential skills of information retrieval and application.