Systematic methods for the descriptive analysis of physician hand movements during surgical or catheter-based interventions have not been previously reported. Although such data would provide critical guidance in the design of surgical instruments, there is currently little information about the chronological and spatial integration of operator movements and user conditions during catheter-based interventions. The essential function of the hand is to provide physical coupling between the cognitive process and the environment, translating intention into action. The ideal surgical instrument is a contiguous extension of the haptic unit (the hand) that enables an expanded range of effector actions and environmental effects. In reality, however, the interface between the hand and the surgical instrument creates a barrier that can introduce variable levels of interference and impedance between the cognitive process and the intended task. The objective of this project is to analyze surgical effector outputs as a function of operator psychomotor inputs, using synchronized multimodal image data recorded during clinical cases of transcatheter neurological intervention. The research methodology consisted of observation and digital recording (video and/or still photography) of 24 diagnostic cerebral angiograms. These recordings were analyzed to synchronously document physician hand movements, surgical effector output (as represented by the fluoroscopic image monitor), and spatial orientation feedback (as represented by rotational movements of the fluoroscopic imaging plane with respect to the patient's anatomical axes), and to identify user-specific conditions. The results are presented as a defined procedure map and as user behaviors characterized by physician experience (e.g., fellow/trainee versus attending physician).
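The spatial orientation feedback described above is governed by the C-arm's two angulation angles (LAO/RAO and cranial/caudal). As a minimal illustrative sketch — the angle conventions, axis assignments, and rotation order here are assumptions for exposition, not the study's recording protocol — the imaging-plane orientation can be expressed as a rotation matrix with respect to the patient's anatomical axes:

```python
import numpy as np

def rotation_x(deg):
    """Rotation about the patient's left-right (x) axis, in degrees."""
    r = np.radians(deg)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(r), -np.sin(r)],
                     [0.0, np.sin(r),  np.cos(r)]])

def rotation_y(deg):
    """Rotation about the patient's head-foot (y) axis, in degrees."""
    r = np.radians(deg)
    return np.array([[ np.cos(r), 0.0, np.sin(r)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(r), 0.0, np.cos(r)]])

def c_arm_orientation(lao_rao_deg, cran_caud_deg):
    """Compose the imaging-plane orientation from the two C-arm angles.
    Assumed convention: LAO/RAO rotation about the head-foot axis,
    followed by cranial/caudal angulation about the left-right axis."""
    return rotation_x(cran_caud_deg) @ rotation_y(lao_rao_deg)

# A neutral (posteroanterior) projection leaves the patient axes unchanged:
print(np.allclose(c_arm_orientation(0.0, 0.0), np.eye(3)))  # True
```

In a synchronized multimodal dataset of the kind described, logging these two angles per frame would allow the imaging-plane orientation to be replayed alongside the hand-movement and fluoroscopic video streams.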
As stated in the Association for the Advancement of Medical Instrumentation (AAMI) HE 75 human factors standard for medical device design, there is a clear need to develop human factors standards for endovascular devices. This research will serve as a guide to that effort. The models and multimodal image database generated by this project will also be used to develop interactive educational tools for training physicians to perform these advanced applications.