Experimental psychology research commonly has participants respond to stimuli by pressing buttons or keys. Standard computer input devices constrain the range of motoric responses participants can make, even as the field advances theory about the importance of the motor system in cognitive and social information processing. Here we describe an inexpensive way to use an electromyographic (EMG) signal as a computer input device, enabling participants to control a computer by contracting muscles that are not usually used for that purpose, but which may be theoretically relevant. We tested this approach in a study of facial mimicry, a well-documented phenomenon in which viewing emotional faces elicits automatic activation of corresponding muscles in the face of the viewer. Participants viewed happy and angry faces and were instructed to indicate the emotion on each face as quickly as possible by either furrowing their brow or contracting their cheek. The mapping of motor response to judgment was counterbalanced, so that one block of trials required a congruent mapping (contract brow to respond “angry,” cheek to respond “happy”) and the other block required an incongruent mapping (brow for “happy,” cheek for “angry”). EMG sensors placed over the left corrugator supercilii muscle and left zygomaticus major muscle fed readings of muscle activation to a microcontroller, which sent a response to a computer when activation reached a pre-determined threshold. Response times were faster when the motor-response mapping was congruent than when it was incongruent, extending prior studies on facial mimicry. We discuss further applications of the method for research that seeks to expand the range of human-computer interaction beyond the button box.
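The core of the method described above, detecting which of two muscle channels first crosses an activation threshold and treating that as the participant's response, can be sketched in a few lines. This is a hypothetical illustration only: the sample values, threshold, and function name are assumptions for the sketch, not the actual firmware or parameters used in the study.

```python
def detect_response(corrugator, zygomaticus, threshold):
    """Scan paired rectified EMG samples in order and return
    (channel_name, sample_index) for the first channel whose
    activation reaches the threshold, or (None, None) if neither does.
    On a microcontroller this loop would run over live ADC readings
    and emit a key event to the host computer instead of returning."""
    for i, (c, z) in enumerate(zip(corrugator, zygomaticus)):
        if c >= threshold:
            return ("corrugator", i)   # brow furrow -> e.g. "angry" key
        if z >= threshold:
            return ("zygomaticus", i)  # cheek contraction -> e.g. "happy" key
    return (None, None)

# Simulated rectified EMG traces (arbitrary units); the corrugator
# channel crosses the illustrative threshold at sample index 3.
corr = [0.10, 0.20, 0.15, 0.90, 1.20]
zygo = [0.10, 0.10, 0.12, 0.20, 0.30]
print(detect_response(corr, zygo, threshold=0.8))  # → ('corrugator', 3)
```

In a congruent block the corrugator channel would be mapped to the "angry" judgment and the zygomaticus channel to "happy"; in the incongruent block the mapping would be reversed, with the detection logic itself unchanged.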