Michael J. Prietula (PhD, MPH) is Professor in GBS and in the Rollins School of Public Health. Dr. Prietula holds a PhD in Information Systems (minors in Computer Science & Psychology) from the University of Minnesota and a Master of Public Health (MPH) from the University of Florida. He was an AI research scientist at Honeywell's Aerospace Systems & Research Center; served on the faculties of Dartmouth College, Carnegie Mellon University, and the University of Florida; and was department chair at the Johns Hopkins University, with an adjunct appointment in the JHU School of Medicine. He is an External Research Scholar at the Institute for Human and Machine Cognition, which develops pioneering technologies to leverage and extend human capabilities via Artificial Intelligence & Robotics.
He has published in such journals as Management Science, Information Systems Research, MIS Quarterly, Cognitive Science, Harvard Business Review, Journal of Organizational Design, Organization Science, Biosecurity & Bioterrorism, Journal of Experimental & Theoretical Artificial Intelligence, ORSA Journal on Computing, Applied Artificial Intelligence, JMIR mHealth & uHealth, Human Factors, Journal of Economic Behavior & Organization, Journal of Experimental Social Psychology, Journal of Personality & Social Psychology, Computers in Human Behavior, PLoS One, Brain Connectivity, and the Philosophical Transactions of the Royal Society. He has received best paper awards from the Hawaii International Conference on System Sciences, the International Conference on Global Defense & Business Continuity, the International Conference on Information Systems, and the Academy of Management. His papers were among the top 5 most downloaded from Organization Science (2014), the most downloaded from Brain Connectivity (2017), and among the top 10 most downloaded from JMIR mHealth & uHealth (2016-2019). He has edited two books, Computational Organization Theory (with K. Carley) and Simulating Organizations: Computational Models of Institutions and Groups (with K. Carley & L. Gasser). He has been funded by Emory's Global Health Institute, the Centers for Disease Control & Prevention, the National Science Foundation, the Air Force Office of Scientific Research, the Office of Naval Research, and the Defense Advanced Research Projects Agency. Michael is a musician, a PADI diving instructor, and was also a stage manager and served on the board of directors for a community theatre in New Hampshire while teaching at Dartmouth.
University of Minnesota: PhD, Information Systems
University of Florida: MPH, Public Health
Areas of Expertise (7)
Human decision making
Computational Modeling of Individuals and Groups
Public Health & Technology
Theatre, Performance & Leadership
Advice in Crisis: Principles of Organizational and Entrepreneurial Resilience
Journal of Organizational Design
S Levine, M Prietula, A Majchrzak
How does (in)accurate information flow in a crisis? When facing a crisis (or preparing for one), managers often turn to peer networks, seeking advice and providing it. Scholars and executives endorse sharing knowledge and experience, especially for boosting resilience and combating crises. They believe such decentralized, peer-to-peer contact suits the ill-structured challenges organizations encounter. Yet this endorsement overlooks a bias known as the Dunning-Kruger effect: people regularly misjudge their own and their peers’ skills. In this paper, we weave case studies and experimental evidence into a computational model examining the dynamic unfolding of information under varying assumptions, showing how organizational design can ameliorate the risks of information biases. We conclude with implications for resilience, research, and practice. This research was funded in part by Goizueta's Summer Research Fund.
Using ADAPT-ITT to Modify a Telephone-Based HIV Prevention Intervention for SMS Delivery: Formative Study
JMIR Formative Research
T Davis, RJ DiClemente, M Prietula
African American adolescent females are disproportionately affected by sexually transmitted infections (STIs) and HIV. Given the elevated risk of STIs and HIV in African American women, there is an urgent need to identify innovative strategies to enhance the adoption and maintenance of STI and HIV preventive behaviors. Even evidence-based interventions (workshops) lose their efficacy as time passes. Post-intervention phone calls by health educators extend the effectiveness, but the use of that technology is declining in the target population. Texting is now the promising technology for extending the efficacy of the original intervention; however, little guidance is available in the public health literature for developing this type of application. Collaborating with the Rollins School of Public Health, we demonstrated how to engage user-experience (UX) methods to design and adapt a post-intervention SMS texting platform for health educator contact. Using a representative advisory board, design iterations revealed critical insight into cultural components, language, and key emergent personas to help cue health educators on their responses. This research was supported by a grant from Emory University’s Global Health Institute and the Goizueta Business School’s Summer Research Fund.
Taking mHealth Forward: Examining the Core Characteristics
JMIR mHealth and uHealth
T Davis, RJ DiClemente, M Prietula
This is a review paper of the core characteristics of mobile health (mHealth) in collaboration with the Rollins School of Public Health. We assert that the relevance of these characteristics to mHealth will endure as the technology advances, so an understanding of these characteristics is essential to the design, implementation, and adoption of mHealth-based solutions. The core characteristics we discuss are (1) the penetration or adoption into populations, (2) the availability and form of apps, (3) the availability and form of wireless broadband access to the Internet, and (4) the tethering of the device to individuals. These collectively act to both enable and constrain the provision of population health in general, as well as personalized and precision individual health in particular. This work was funded in part by a grant from Emory University’s Global Health Institute. This was in the top 10 most downloaded articles from this journal in 2017-2019.
Design Principles for Crisis Information Systems
International Journal of Information Systems for Crisis Management
C Nikolai, T Johnson, I Becerra-Fernandez, M Prietula, G Madey
Since Hurricane Katrina, research has focused on improving disaster management through the use of specially designed crisis information systems (CIS). However, there are few design principles specific to these dynamic environments. Toward that end, we engaged in a 9-month project studying one of the most respected emergency response organizations in the world -- the Miami-Dade Emergency Operations Center in Miami-Dade County, Florida. Here we report key principles of design that apply to these critical information systems. This work was supported by the University of Notre Dame, Emory University, the U.S. Department of Education (DOE), and the National Science Foundation (NSF).
SimEOC: A Distributed Web-Based Virtual Emergency Operations Center Simulator for Training and Research
International Journal of Information Systems for Crisis Response and Management
C Nikolai, T Johnson, M Prietula, I Becerra-Fernandez, G Madey
Training is an integral part of disaster preparedness. Practice in dealing with crises improves one’s ability to manage emergency situations. As an emergency escalates, more and more agencies get involved, many of whom would not normally work together. These agencies require critical training to learn how to manage the crisis and to work together across jurisdictional boundaries. In many jurisdictions, training is conducted through discussion-based tabletop and paper-based scenario performance exercises or generic forms of computer-based exercises. In this paper, we describe a socio-technical computer-based training simulator and research tool for upper-level emergency managers. This tool is important because it enables emergency managers to configure the simulation to fit their emergency operations form. This allows training for crises more efficiently and effectively in a virtual environment. It also serves as a research tool for scientists to study emergency management decision-making, infrastructural design, and organizational learning. This research was funded by the National Science Foundation (NSF).
The Conforming Brain and Deontological Resolve
PLOS ONE
M Pincus, L LaViers, M Prietula, G Berns
Are your personal identity and sacred values subject to forces of social influence? Sacred values include core religious beliefs and moral norms that constrain decision-making across a person’s lifetime. In many cultures, violating sacred values is tantamount to disavowing group membership, underscoring the importance of sacred values to group identity. But what is going on in our brain when we are faced with choices where our beliefs go against “the group”? This is the first paper to discover a neurobiological metric for deontological resolve. Deontological resolve defines how strongly one relies on absolute rules of right and wrong in the representation of one’s personal values, and the willingness to modify or deny one’s values in the presence of social influence. Using fMRI, we found that the relative activity in the ventrolateral prefrontal cortex (VLPFC) during the passive processing of sacred values predicted individual differences in conformity. Individuals with stronger deontological resolve, as measured by greater VLPFC activity, displayed lower levels of conformity. We conclude that unwillingness to conform to others' values is associated with a strong neurobiological representation of social rules. This research was supported by the Office of Naval Research (ONR) and the Defense Advanced Research Projects Agency (DARPA).
Open collaboration for innovation: principles and performance
Organization Science
S Levine, M Prietula
The principles of open collaboration for innovation (and production), once distinctive to open source software, are now found in many other ventures. Some of these ventures are Internet based: for example, Wikipedia and online communities. Others are off-line: they are found in medicine, science, and everyday life. Such ventures have been affecting traditional firms and may represent a new organizational form. Despite the impact of such ventures, their operating principles and performance are not well understood. Here we define open collaboration (OC), the underlying set of principles, and propose that it is a robust engine for innovation and production. In all instances, participants create goods and services of economic value, they exchange and reuse each other’s work, they labor purposefully with just loose coordination, and they permit anyone to contribute and consume. These principles distinguish OC from other organizational forms, such as firms or cooperatives. We identify and investigate three elements that affect performance: the cooperativeness of participants, the diversity of their needs, and the degree to which the goods are rival (subtractable). Through computational experiments, we find that OC performs well even in seemingly harsh environments: when cooperators are a minority, free riders are present, diversity is lacking, or goods are rival. We conclude that OC is viable and likely to expand into new domains. The findings also inform the discussion on new organizational forms, collaborative and communal. This project was supported by a summer Research grant from the Goizueta Business School, Emory University, and discussions at the Human Social, Culture and Behavior Modeling Program meetings of the Office of Naval Research (ONR).
Short- and long-term effects of a novel on connectivity in the brain
Brain Connectivity
G Berns, K Blaine, M Prietula, B Pye
How does reading a novel change your brain? Our paper reveals the first evidence of how reading a novel alters the brain's resting state over time. Novels are stories, and stories are complicated objects of communication. Although several linguistic and literary theories describe what constitutes a story, neurobiological research has just begun to elucidate the brain networks active when processing stories. These studies have focused on the immediate response to short stories. We chose a novel over a short story because the length and depth of the novel would afford a set of repeated engagements with associated, unique stimuli (sections of the novel) set in a broader, controlled stimulus context that could be consumed between several fMRI scanning periods. Every participant in the study engaged with the same novel, Pompeii, a captivating 2003 thriller penned by Robert Harris. Pompeii is grounded in the historical eruption of Mount Vesuvius in ancient Italy. It narrates the journey of a main character who, positioned outside the city of Pompeii, observes steam and peculiar occurrences surrounding the volcano. He embarks on a desperate quest to return to Pompeii in an attempt to rescue the woman he cherishes. Concurrently, as the volcano stirs ominously, the city's inhabitants remain oblivious to the impending danger. We identified three independent networks that had significant increases in activity. Two of these networks involved hubs in brain regions previously associated with perspective-taking and story comprehension, and the changes exhibited a time course that decayed rapidly after the completion of the novel. A third network showed long-term changes in connectivity, which persisted for several days after the reading.
This was observed in the bilateral somatosensory cortex, suggesting a potential mechanism for “embodied semantics” whereby reading a novel invokes neural activity associated with bodily sensations. This project was funded by the Defense Advanced Research Projects Agency (DARPA). This story has been picked up by a wide range of media outlets, and the paper was the most downloaded Brain Connectivity article in 2017.
How knowledge transfer impacts performance: A multilevel model of benefits and liabilities
Organization Science
S Levine, M Prietula
When does knowledge transfer benefit performance? Combining field data from a global consulting firm with an agent-based model, we examine how efforts to supplement one’s knowledge from coworkers interact with individual, organizational, and environmental characteristics to impact organizational performance. We find that once cost and interpersonal exchange are included in the analysis, the impact of knowledge transfer is highly contingent. Depending on specific characteristics and circumstances, knowledge transfer can improve, matter little to, or even harm performance. Three illustrative studies clarify puzzling past results and offer specific boundary conditions: (1) At the individual level, better organizational support for employee learning diminishes the benefit of knowledge transfer for organizational performance. (2) At the organization level, broader access to organizational memory makes global knowledge transfer less beneficial to performance. (3) When the organizational environment becomes more turbulent, the organizational performance benefits of knowledge transfer decrease. The findings imply that organizations may forgo investments in both organizational memory and knowledge exchange, that wide-ranging knowledge exchange may be unimportant or even harmful for performance, and that organizations operating in turbulent environments may find that investment in knowledge exchange undermines performance rather than enhances it. At a time when practitioners are urged to make investments in facilitating knowledge transfer and collaboration, appreciation of the complex relationship between knowledge transfer and performance will help in reaping benefits while avoiding liabilities. This research was supported in part by a Summer Research grant from the Goizueta Business School, Emory University, and by discussions at the Human Social, Culture and Behavior Modeling Program meetings of the Office of Naval Research (ONR).
The price of your soul: neural evidence for the non-utilitarian representation of sacred values
Philosophical Transactions of the Royal Society B
G Berns, E Bell, M Capra, M Prietula, S Moore, B Anderson, J Ginges, S Atran
How sacred are your values? This is the first paper that provides empirical neurobiological evidence that sacred values affect behaviour by retrieving and processing deontic rules, not through a utilitarian evaluation of costs and benefits. Sacred values, such as those associated with religious, political, or ethnic identity, underlie many important individual and group decisions in life. Individuals typically resist attempts to trade off their sacred values, even for material benefits. However, little is known about the neural representation and processing of sacred values. Our results explain why argument often fails to alter belief. Truly sacred values “short circuit” subsequent cost-benefit assessment, producing increased activity in the left temporoparietal junction and ventrolateral prefrontal cortex, regions previously associated with semantic rule retrieval. Philosophical Transactions is the world’s first and longest-running scientific journal. This research was supported by grants from the Air Force Office of Scientific Research (AFOSR) through the Office of Naval Research (ONR). This work has been picked up by a wide range of media outlets.
Negotiation Offers and the Search for Agreement
Negotiation and Conflict Management Research
M Prietula, L Weingart
A key component of negotiation dynamics is the search for mutually beneficial agreements, and offer exchange is a key element of that process. Rooted in the tradition of information processing psychology, we develop a theoretical model that conceives of negotiation as the collaborative search of a complex offer space. Negotiators simplify and coordinate search via information contained in offer exchanges, isolating subregions of the offer space for potential solutions. We suggest that early search is more exploratory and primarily influenced by the value of offers; later search is more focused on refinement and is influenced by the content of offers. In this model, search by value is substantially more difficult than search by content, and parties seek value through communicating about content. Important information about the negotiators’ perspectives is revealed in comprehensive offers, and critical insight into this search process can be gained by examining the pattern of comprehensive offers.
CASA, WASA, and the Dimensions of US
Computers in Human Behavior
P Karr-Wisniewski, M Prietula
Do people treat websites as if they were human social actors? Evidence has repeatedly shown that when interacting with computers, people treat computers with typical human social norms – computers are social actors (CASA). We conducted the first research demonstrating that websites are also social actors (WASA) and that WASA dominates CASA. We retest the CASA paradigm and find that our new hypothesis – Websites are Social Actors (WASA) – reduces the CASA effect in contexts where individuals form a social attachment to websites instead of computers. When individuals remained at the same computer, the most polite scores were obtained for the same website and the least polite for the other website. Our exploratory factor analysis generated the same results from a reduced version of the original 14-item Politeness scale, but also generated two specific underlying constructs highly related to emerging research describing how humans automatically engage in social evaluation: Helpful (Competence) and Enjoyable (Warmth). We find evidence that suggests humans can exhibit politeness toward websites and literally (not virtually) treat them as social actors. The results are consistent with research in [human] social cognition and suggest that the politeness construct may be tapping similar and fundamental components of how humans engage with others in their social world – the enduring two dimensions of warmth and competence – that we see emerging in AI-human engagements today.
The evolution of metanorms: quis custodiet ipsos custodes?
Computational and Mathematical Organization Theory
M. Prietula, D Conway
How are norms maintained? Axelrod (in Am. Political Sci. Rev. 80(4):1095–1111, 1986) used an evolutionary computational model to proffer a solution: the metanorm (a norm to enforce norm enforcement). Although often discussed, this model has neither been sufficiently replicated nor explored. In this paper we replicate and extend that agent-based model. Results were generally supportive of the original. Speculations in the original regarding the requirement to link sanctions underlying the metanorm structure were not supported, as differentiating punishment likelihoods against defectors from punishment likelihoods against shirkers (non-enforcers of the norm against defection) led to more efficient and effective sanctioning structures that allowed norm emergence. Replications of the Groups game (two groups differing in numbers and power) generally supported the original reports, but true norms against defection emerged only if sanctioning structures were differentiated, resulting in the Strong group developing a dominant norm against others defecting (Metavengeance). That is, when groups are involved with differential power, Metanorms fail unless a more sophisticated sanctioning structure (Metavengeance) is supported.
Situational Uses of Syndromic Surveillance
Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science
J Buehler, E Whitney, D Smith, M Prietula, S Stanton, A Isakov
We conducted case studies of selected events with actual or potential public health impacts to determine whether and how health departments and hospitals used automated systems to promptly identify public health threats. We interviewed public health and hospital representatives and applied qualitative analysis methods to identify response themes. So-called ‘‘syndromic’’ surveillance methods were most useful in situations with widespread health effects, such as respiratory illness associated with seasonal influenza, exposures to smoke from wildfires, or potential pathogens in air samples. Typically, these data supplemented information from traditional sources to provide a timelier or fuller mosaic of community health status, and use was shaped by long-standing contacts between health department and hospital staffs. State or local epidemiologists generally preferred syndromic systems they had developed over the CDC BioSense system, citing lesser familiarity with BioSense and less engagement in its development. Instances when BioSense data were most useful to state officials occurred when analyses and reports were provided by CDC staff. Understanding the uses of surveillance information during such events can inform further investments in surveillance capacity in public health emergency preparedness programs. This project was supported by the National Center for Public Health Informatics of the Centers for Disease Control and Prevention under the BioSense Utility cooperative agreement.
In “the Zone”: The Role of Evolving Skill and Transitional Workload on Motivation and Realized Performance in Operational Tasks
International Journal of Operations & Production Management
E Bendoly, M Prietula
This is believed to be the first study to explicitly examine the inverted-U dynamics stemming from the interplay of both skill and workload on motivation and performance over a multi-period framework of analysis. The purpose of this paper is to examine how training specific to a given operational task, and subsequent experiential learning, can heighten skill and hence shift the level of workload at which individuals are most productively motivated. To analyze these effects, a laboratory experiment was used involving a vehicle routing application and 156 managers exposed to a 2 x 3 complete treatment design. Both multi-period objective in-task data and subjective self-reports were collected to tap into skill levels, actions, and behavioral variables of interest. In the absence of additional workload challenges, the paper finds that increases in skill may in fact significantly limit, and in some cases actually degrade, overall motivation as well as objective performance. The skill-challenge-motivation dynamics observed have direct repercussions for existing management models in which training and experience are viewed as having strictly monotonic benefits to performance. The implications also promote more informed models of worker behavior in operations modeling that otherwise view performance as static or monotonically increasing with experience.
When behavior matters: Games and computation in A Behavioral Theory of the Firm
Journal of Economic Behavior and Organization
M Prietula, H Watson
A Behavioral Theory of the Firm presents a computational model of a duopoly that is based on observations of firm behavior and that incorporates a range of behavioral constructs. Because this model is starkly different from the traditional game-theoretic analysis of duopoly, it is useful to compare the performance of a game-theoretic version of this model, shorn of all behavioral constructs, with the original Cyert and March paradigm. To do this we calibrate the game-theoretic model with all the economic components of the computational model, and we assume that firms could choose either cooperative or non-cooperative strategies. We find that the pricing strategy of the computational firms is similar to that found in a non-cooperative game-theoretic outcome and that the advertising choice of the computational firms is less than what non-cooperative or cooperative game-theoretic behavior would predict. We also consider how initializing the choices of the computational firms with those of the game-theoretic firms affects their performance. Informing the computational firms in this way led to greater changes in advertising than pricing strategies; profits of the computational firms were greatest when their initial choices were those found in a non-cooperative equilibrium.
The Making of an Expert
Harvard Business Review
K Ericsson, M Prietula, E Cokely
Popular lore tells us that genius is born, not made. Scientific research, on the other hand, reveals that true expertise is mainly the product of years of intense practice and dedicated coaching. We studied data on the behavior of experts gathered by more than 100 scientists. Ordinary practice is not enough: to reach elite levels of performance, you need to constantly push yourself beyond your abilities and comfort level. Such discipline is the key to becoming an expert in all domains, including management and leadership. What consistently distinguished elite surgeons, chess players, writers, athletes, pianists, and other experts was the habit of engaging in "deliberate" practice--a sustained focus on tasks that they couldn't do before. Experts continually analyzed what they did wrong, adjusted their techniques, and worked arduously to correct their errors. Even such traits as charisma can be developed using this technique. For example, the authors describe an approach in which applying specific techniques of drama enhanced executives' powers of presence and persuasion. Through deliberate practice, leaders can improve their ability to win over their employees, their peers, or their board of directors. The journey to elite performance is not for the impatient or the faint of heart. For some types of expertise, it takes at least a decade and requires the guidance of an expert teacher to provide tough, often painful feedback. It also demands that would-be experts develop their "inner coach" and eventually drive their own progress. However, the methods used to acquire expertise can (and should) be applied to skill improvement at most any level. This HBR paper has estimated views now exceeding 600,000. This research was funded in part by Goizueta's Summer Research Fund.
Historical Roots of the A Behavioral Theory of the Firm Model at GSIA
Organization Science
M Augier, M Prietula
Richard Cyert and James March’s (1963) A Behavioral Theory of the Firm (ABTOF) is one of the most influential works in organization science. An important element of that work was a computational model of a duopoly, which was arguably the first computational model that instantiated organizational constructs within a substantial theoretical framework. We suggest that the academic environment within which this theory and model grew was instrumental in its emergence. Furthermore, an examination of the model itself (by triangulating on the verbal descriptions, the flow charts, and the code) reveals innovative embodiments of organizational attention, organizational learning, organizational memory, routines, metaroutines, aspiration level adjustments, and computational experiments. In this paper, we examine the historical roots of the model—the concepts, culture, and characters at Carnegie Tech and the Graduate School of Industrial Administration (GSIA). Although causality is difficult to assess historically, we suggest the significance of a strong research-based, interdisciplinary culture at a time when innovative (and often computational) concepts and theories were emerging within the contexts of computer science, economics, and psychology. A shorter version of this paper won the John F. Mee Award for Management History from the Academy of Management. This project was funded in part by the Carnegie Bosch institute of Carnegie Mellon University.
Factors Influencing Analysis of Complex Cognitive Tasks: A Framework and Example from Industrial Process Control
Human Factors
M Prietula, P Feltovich, F Marchak
We propose that considering four categories of task factors can facilitate knowledge elicitation efforts in the analysis of complex cognitive tasks: materials, strategies, knowledge characteristics, and goals. A study was conducted to examine the effects of altering aspects of two of these task categories on problem-solving behavior across skill levels: materials (static, dynamic) and goal specifications (present, absent). Two versions of an applied engineering problem with feedback were presented to expert, intermediate, and novice participants. One version was a dynamic simulation modeling changes in real time, and the other was a static image of the system structure with no dynamics. The experts performed better across material conditions, and explicit goals increased their performance more than that of the other groups. Static groups generated richer protocols. We conclude that demonstrating differences in performance in this task requires different materials than explicating the underlying knowledge that leads to performance. We also conclude that substantial knowledge is required to exploit the information yielded by the dynamic form of the task or the explicit solution goal. This simple model can help to identify the contextual factors that influence the elicitation and specification of knowledge, which is essential in the engineering of joint cognitive systems.
Getting to Best: Efficiency versus Optimality in Negotiation
Cognitive Science
E Hyder, M Prietula, L Weingart
Negotiation between two individuals is a common task that typically involves two goals: maximize individual outcomes and obtain an agreement. However, research on the simplest negotiation tasks demonstrates that although naive subjects can be induced to improve their performance, they are often no more likely to achieve fully optimal solutions. The present study tested the prediction that a decrease in a particular type of argumentative behavior, substantiation, would result in an increase in optimal agreements. As substantiation behaviors depend primarily on the supplied content of the negotiation task, it was also predicted that substantiation behavior would be reduced by curtailing the content. A 2x3x2 experimental design was employed, where both negotiation tactics (list of tactics present versus absent) and negotiation task content (high versus low) were varied to determine the processes leading beyond solution improvement to solution optimality. Sixty-one dyads engaged in a two-party, four-issue negotiation task. All negotiations were videotaped and analyzed. Although the list of negotiation tactics resulted in improved performance, only the content manipulation resulted in a significant increase in dyads achieving optimal solutions. Analyses of the coded protocols indicated that the key difference in achieving optimality was a reduction in persistent substantiation-related operators (substantiation, along with single-issue preferences and procedures) and an increase in a complex macro-operator, multi-issue offers, that reduced the problem space, facilitating the search for optimality.
Extending the Cyert-March Duopoly Model: Organizational and Economic Insights. Organization Science
M Prietula, H Watson
Two studies were conducted to further explore the organizational and economic insights provided by the Cyert-March duopoly model (C-M) described in A Behavioral Theory of the Firm (ABTOF, Cyert and March 1963). Study 1 examined the extent to which two firms that differed solely in the decision behavior of a routine would also differ in organizational (quasiresolution of conflict, uncertainty avoidance, problem-driven search, and organizational learning) and economic (profit, price, market share, cost) measures and impacts. Three such manipulations were separately examined, where each manipulation altered the relative propensity of the firm to be more or less reactive to three events: production growth pressure, price adjustment under organizational goal conflict, and initial price adjustment under profit goal failure. Analysis of the behaviors revealed how subtle changes in these key routines could generate complex organizational and economic effects that impact firm success. The dominant result demonstrated that organizations that did not perform well organizationally did not perform well economically. In addition, higher reactivity resulted in higher costs, in part because of quantified organizational constructs (e.g., slack, pressure). Study 2 examined the data from the first study to determine the extent to which the performance of the duopoly was consistent with a set of eight stylized economic facts drawn from the empirical literature on oligopoly. Study 2 demonstrated consistency with five of the eight stylized economic facts and provided partial support for two others. We conclude that behavioral constructs of ABTOF, as articulated in the Cyert-March duopoly model, provide behavior and performance capabilities that bridge the gap between game-theoretic models of duopoly and the computational and informational realities of organizations.
Exploring the effects of agent trust and benevolence in a simulated organizational task. Applied Artificial Intelligence
M Prietula, K Carley
Executives argue intuitively that trust is critical to effective organizational performance. Although articulated as a cognitive/affective property of individuals, the collective effect of events influencing (and being influenced by) trust judgments must certainly impact organizational behavior. To begin to explore this, we conducted a simulation study of trust and organizational performance. Specifically, we defined a set of computational AI agents, each with a trust function capable of evaluating the quality of advice from the other agents and rendering judgments on the trustworthiness of the communicating agent. As AI agent judgments impact subsequent choices to accept or to generate communications, organizational performance is influenced. We manipulated two AI agent properties (trustworthiness, benevolence), two organizational variables (group size, group homogeneity/liar-to-honest ratio), and one environmental variable (stable, unstable). Results indicate that in homogeneous groups, honest groups did better than groups of liars, but under environmental instability, benevolent groups did worse. Under all conditions for heterogeneous groups, it only took one to three liars to degrade organizational performance.
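A minimal sketch of the kind of trust mechanism described can be written in a few lines of Python. The mechanics and parameters are assumed, not those of the original model: agents consult one another about a verifiable fact, honest agents answer truthfully, liars invert, and a simple trust tally gates whether advice is accepted.

```python
import random

def run(n_agents=10, n_liars=0, rounds=200, seed=1):
    """Toy trust-gated advice exchange (illustrative only). Returns the
    fraction of consultations that produced a correct answer."""
    rng = random.Random(seed)
    liar = [i < n_liars for i in range(n_agents)]
    trust = [[0] * n_agents for _ in range(n_agents)]  # pairwise trust tallies
    correct = 0
    for _ in range(rounds):
        asker = rng.randrange(n_agents)
        source = rng.randrange(n_agents)
        while source == asker:
            source = rng.randrange(n_agents)
        fact = rng.random() < 0.5                      # a checkable yes/no fact
        advice = (not fact) if liar[source] else fact  # liars invert the truth
        # Accept advice only from sources not yet distrusted; otherwise guess.
        answer = advice if trust[asker][source] >= 0 else (rng.random() < 0.5)
        correct += (answer == fact)
        # Feedback: reward good advice mildly, punish bad advice harder.
        trust[asker][source] += 1 if advice == fact else -2
    return correct / rounds

honest = run(n_liars=0)
mixed = run(n_liars=3)
print(honest, mixed)
```

Even this toy version shows the qualitative pattern the study reports: a few liars are enough to pull group performance below the all-honest baseline, because their bad advice must first be paid for before trust judgments can screen them out.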
Knowledge and the sequential processes of negotiation: A Markov chain analysis of Response-in-Kind. Journal of Experimental Social Psychology
L Weingart, M Prietula, E Hyder, C Genovese
The impact of tactical knowledge on integrative and distributive response-in-kind behavior sequences and the ability to shift from distributive to integrative behaviors were examined using data from a prior study. Ninety dyads engaged in a multi-issue joint venture negotiation. Forty-five dyads were provided tactical knowledge, and the other 45 were not. Markov chain analysis was used to test the hypotheses. A second-order chain best fit the data. Results showed that negotiators responded-in-kind to both distributive and integrative tactical behavior regardless of tactical knowledge. In line with Weick’s (1969) ‘‘double interact’’ proposition of interlocked behaviors, negotiators with tactical knowledge were more likely to respond-in-kind to integrative behavior than were those without such knowledge, but only after their previous integrative behavior had been reciprocated. In addition, negotiators with tactical knowledge engaged in longer chains of integrative behavior (regardless of the behavior of the other party) than did negotiators without tactical knowledge; however, this only occurred after two integrative behaviors had occurred previously.
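The estimation step behind a second-order Markov chain analysis of coded behavior can be sketched as follows. The coded sequence is invented for illustration (D marks a distributive behavior, I an integrative one); the idea is simply that each transition probability is conditioned on the previous two behaviors.

```python
from collections import Counter, defaultdict

def second_order_transitions(sequence):
    """Estimate second-order transition probabilities P(x_t | x_{t-2}, x_{t-1})
    from a coded behavior sequence (the estimation step only, as a sketch)."""
    counts = defaultdict(Counter)
    for a, b, c in zip(sequence, sequence[1:], sequence[2:]):
        counts[(a, b)][c] += 1                  # tally outcome c after state (a, b)
    return {
        state: {sym: n / sum(ctr.values()) for sym, n in ctr.items()}
        for state, ctr in counts.items()
    }

# Illustrative coded negotiation sequence (not data from the study):
# D = distributive behavior, I = integrative behavior.
seq = "DDIDIIIDDIIIDDDIIID"
probs = second_order_transitions(list(seq))
print(probs[("I", "I")])
```

In an actual analysis, such estimated transition matrices are compared across conditions (here, tactical knowledge present versus absent) and across chain orders to find the model that best fits the observed response-in-kind sequences.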
Design versus cognition: The interaction of agent cognition and organizational design on organizational performance. Journal of Artificial Societies and Social Simulation
K Carley, M Prietula, Z Lin
The performance of organizations with different structures is examined using agent-based computational simulation models, experimental data, and archival data, focusing on the relation between the way in which the organization is coordinated and its performance. These variations enable exploration of the role of agent capabilities and of the way in which agent capability and coordination interact to affect performance. Both micro and macro organizational behavior are examined. Results suggest that simpler models of agents are needed at macro levels, while more detailed, more cognitively accurate models are needed at micro or small-group levels, to generate the same predictive accuracy.
When processes learn: Steps toward crafting an intelligent organization. Information Systems Research
D Zhu, M Prietula, WL Hsu
This is the first research to engage a general AI cognitive architecture capable of learning in an organizational task, and it also demonstrates how a collection of agents can share their knowledge of that task. In particular, we present the crafting of an organizational process that can learn, and we develop and apply a new set of organizational learning metrics to that process. The process is a simplification of a complex, parallel-machine production scheduling task performed in a local manufacturing firm. The system, Dispatcher-Soar, generally supports a symbolic, constraint propagation approach based, in part, on the reasoning methods of the human scheduler at the firm. The implementation of this process is based on a dispatching rule used by the expert. The behavior of Dispatcher-Soar was examined in a small case study of the effects of scheduling volume and learning on performance. Results indicated that the knowledge gained can reduce within-trial scheduling effort. An analysis of the generated knowledge structures (chunks) provided insight into how that learning was accomplished and contributed to process improvements. As the knowledge generated was in a form standardized to a common architecture, metrics were used to evaluate the production efficiency (ηprod), utility (ηutil), and effectiveness (ηeff) of the accumulated organizational knowledge across trials.
Knowledge matters: The effect of tactical descriptions on negotiation behavior and outcome. Journal of Personality and Social Psychology
L Weingart, E Hyder, M Prietula
The impact of tactical knowledge on negotiator behaviors and joint outcomes was examined. It was hypothesized that the availability of written descriptions of negotiation tactics would provide negotiators with the knowledge necessary to apply in a mixed-motive negotiation and that, as a result, these negotiators would engage in different behaviors leading to higher joint outcomes than would negotiators without this knowledge. Ninety dyads engaged in a multi-issue joint venture negotiation: 45 dyads were provided tactical descriptions, and the other 45 were not. Dyads with tactical knowledge engaged in more integrative behaviors and achieved higher joint outcomes, with integrative behaviors serving as mediators of the knowledge-outcome effect. Distributive behaviors were found to be negatively related to joint outcome but were not influenced by tactical knowledge.
Software-effort estimation with a case-based reasoner. Journal of Experimental and Theoretical Artificial Intelligence
M Prietula, S Vicinanza, T Mukhopadhyay
Software effort estimation is an important but difficult task. Existing algorithmic models often fail to predict effort accurately and consistently. To address this, we developed a computational AI approach to software effort estimation. cEstor is a case-based reasoning engine developed from an analysis of expert reasoning. cEstor's architecture explicitly separates case-independent productivity adaptation knowledge (rules) from case-specific representations of prior projects encountered (cases). Using new data from actual projects, uncalibrated cEstor generated estimates which compare favorably to those of the referent expert, calibrated Function Points and calibrated COCOMO. The estimates were better than those produced by uncalibrated Basic COCOMO and Intermediate COCOMO. The roles of specific knowledge components in cEstor (cases, adaptation rules, and retrieval heuristics) were also examined. The results indicate that case-independent productivity adaptation rules affect the consistency of estimates and appropriate case selection affects the accuracy of estimates, but the combination of an adaptation rule set and unrestricted case base can yield the best estimates. Retrieval heuristics based on source lines of code and a Function Count heuristic based on summing over differences in parameter values were found to be equivalent in accuracy and consistency, and both performed better than a heuristic based on Function Count totals.
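The division of labor that cEstor's architecture describes, case retrieval kept separate from case-independent adaptation rules, can be illustrated with a small Python sketch. The cases, the retrieval heuristic, and the single adaptation rule below are invented for illustration; they are not cEstor's knowledge base.

```python
# Illustrative case base: (size in KLOC, team experience 1-5, actual effort
# in worker-months). These numbers are hypothetical.
cases = [
    (50, 4, 120),
    (120, 3, 410),
    (300, 2, 1050),
]

def estimate(size, experience):
    """Toy case-based estimate: retrieve the nearest prior case, scale its
    effort, then apply one case-independent productivity adaptation rule."""
    # Retrieval heuristic: nearest prior case by size.
    base_size, base_exp, base_effort = min(cases, key=lambda c: abs(c[0] - size))
    # Scale effort linearly with size relative to the retrieved case.
    effort = base_effort * (size / base_size)
    # Adaptation rule (assumed): more experienced teams need less effort.
    effort *= 1.0 - 0.1 * (experience - base_exp)
    return effort

print(estimate(100, 3))
```

The separation matters: the case base supplies an anchor from a concrete prior project, while the adaptation rules adjust that anchor for productivity factors, which is the combination the paper found yielded the best estimates.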
Computational organization theory: Autonomous agents and emergent behavior. Journal of Organizational Computing
M Prietula, K Carley
A computational organization theory is the articulation of an organization theory in the form of a computer program. We describe an example of this approach to studying organizational phenomena through the use of simulated artificial intelligent agents, present a detailed description of such a model, and demonstrate the application through a series of experiments conducted with the model. The model, called Plural‐Soar, represents a partial instantiation of a cognitively motivated theory that views organizational behavior as emergent behavior from the collective interaction of intelligent agents over time, and that causal interpretations of certain organizational phenomena must be based on theoretically sufficient models of individual deliberation. We examine the individual and collective behavior of the agents under varying conditions of agent capabilities defined by their communication and memory properties. Thirty separate simulations with homogeneous agent groups were run varying agent type, group size, and number of items in the order list an agent acquires. The goal of the simulation experiment was to examine how fundamental properties of individual coordination (communication and memory) affected individual and group productivity and coordination efforts under different task properties (group size and order size). The specific results indicate that the length of the item list enhances performance for one to three agent groups, but with larger groups memory effects dominate. Communication capabilities led to an increase in idle time and undesirable collective behavior. The general conclusion is that there are subtle and complex interactions between agent capabilities and task properties that can restrict the generality of the results, and that computational modeling can provide insight into those interactions.
Applying an Architecture for General Intelligence to Reduce Scheduling Effort. ORSA Journal on Computing
M Prietula, W-L Hsu, D Steier, A Newell
Merle-Soar applies an AI architecture for general intelligence and learning (Soar) to demonstrate how scheduling effort can be reduced when solving scheduling problems. In particular, we describe how Merle-Soar schedules sequences of jobs on a single bottleneck machine in a job shop. The knowledge of dispatching, acquired from examining how a human expert performs the task, is cast as search rules. A study examined the contribution of learning within tasks through the change in reasoning effort as knowledge is accumulated over successive trials. The results indicated that dramatic reductions in scheduling effort (in terms of the architecture) were obtained. Knowledge gained early in the scheduling task was subsequently applied later in the task to reduce deliberation, and knowledge gained from one trial successfully reduced deliberation effort in subsequent trials. Additionally, the reduction exhibited the general power law of learning documented in psychological studies of skill acquisition. This work was supported, in part, by a Faculty Development Grant from Carnegie Mellon University, by the Engineering Design Research Center at Carnegie Mellon University, the Center for the Management of Technology at GSIA/CMU, and by the Defense Advanced Research Projects Agency (DARPA).
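The power law of learning mentioned above, effort T on trial N falling as T = a·N^(−b), can be recovered from effort data by ordinary least squares in log-log space, since log T = log a − b·log N is linear. The sketch below uses noise-free illustrative values, not the paper's measurements.

```python
import math

# Illustrative effort-per-trial data generated from T = 100 * N**(-0.6)
# (assumed numbers, e.g. decision cycles per trial; not the paper's data).
trials = [1, 2, 3, 4, 5, 6, 7, 8]
effort = [100.0 * n ** -0.6 for n in trials]

# Least-squares fit of log T = log a - b * log N.
xs = [math.log(n) for n in trials]
ys = [math.log(t) for t in effort]
k = len(xs)
mx, my = sum(xs) / k, sum(ys) / k
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
b = -slope                      # learning-rate exponent
a = math.exp(my - slope * mx)   # effort on the first trial
print(round(a, 3), round(b, 3))
```

With real (noisy) measurements the same regression gives the estimated exponent b, which is what is compared against the power law documented in psychological studies of skill acquisition.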
Organizational Simulation and Information Systems Design: An Operations Level Example. Management Science
A Kumar, P Ow, M Prietula
The interplay between organizational structure, the decisions made by agents within the structure, and the technology supporting those agents is an important and complex, but not well understood, phenomenon in modern organizational studies. In this paper we describe how simulating key aspects of an organization's structure, in this case a hospital, can yield insights into the design of information systems and their performance. In particular, we report on a project that simulates alternative distributed decision-making approaches for patient scheduling tasks. The results indicate that there are important and complicated interactions between the alternative organizational structures simulated, the form of the information systems supporting those structures, and the task environment. This suggests that current, universal, a priori assumptions about the interplay between technology and organizational structure are questionable. Furthermore, organization-specific simulation is seen as a potentially useful method of explicating the important tradeoffs in alternative design possibilities.
A mixed-initiative scheduling workbench: Integrating AI, OR and HCI. Decision Support Systems
W Hsu, M Prietula, G Thompson, P Ow
In this paper we describe a decision support system for scheduling called MacMerl. This system weaves together numeric and symbolic AI computing techniques to form a ‘scheduler's workbench’. MacMerl has two major components. The first is a Scheduling Kernel which includes a Generative Scheduler, a Constraint Checker, and a Reactive Scheduler. The second is a Manual Scheduler which permits the human to create or modify schedules and includes a Critiquer as well as access to routines in the Scheduling Kernel. Taken together, these components support an approach to problem solving we call mixed-initiative scheduling, in which the human and the machine interact in a coherent and cooperative manner to solve complex production scheduling problems.
Examining the Feasibility of a Case-Based Reasoning Model for Software Effort Estimation. MIS Quarterly
S Vicinanza, T Mukhopadhyay, M Prietula
Existing algorithmic models fail to produce accurate software development effort estimates. To address this problem, a production+case-based AI reasoning model, called Estor, was developed based on the verbal protocols of a human expert solving a set of estimation problems. Estor was then presented with 15 software effort estimation tasks. The estimates of Estor were compared to those of the expert as well as those of the function point and COCOMO estimations of the projects. The estimates generated by the human expert and Estor were more accurate and consistent than those of the function point and COCOMO methods. In fact, Estor was nearly as accurate and consistent as the expert. These results suggest that a case-based reasoning approach for software effort estimation holds promise and merits additional research. Also, this was the first AI model that linked two different AI architectures together.
A protocol-based coding scheme for the analysis of medical reasoning. International Journal of Man-Machine Studies
F Hassebrock, M Prietula
One of the most common methods of codifying and interpreting human knowledge is through the use of verbal protocol analysis. Although the application of this methodology has increased in recent years, few detailed examples are readily available in the literature. This paper discusses the theoretical issues and methodological procedures pertaining to the analysis of verbal protocols collected from physicians engaged in medical problem solving. We first present a brief historical perspective on verbal protocol methodology. We then discuss how we have come to view the task of medical diagnosis both in general and in particular with respect to a specific specialty—congenital heart disease. Next, we describe and provide examples of our methodology for coding verbal protocols of physicians into abstract, but meaningful objects which are elements of a theory of diagnostic reasoning. In particular, we demonstrate how the coding scheme can represent an important aspect of medical problem solving behavior called a line of reasoning. We conclude by proposing how such analysis is important to understanding the psychology of medical problem solving and how this type of analysis plays an important role in the development of medical artificial intelligence systems and educational efforts directed toward the development of expertise in medical problem solving.
Form and substance in physical database design: An empirical study. Information Systems Research
M Prietula, S March
As with many complex design problems, physical database design is difficult, ill-structured, and highly human intensive. In order to effectively construct support systems or improve the practice of database design, it is important to understand how human designers reason about the task. We report an empirical study of physical database design problem solving. Thirteen subjects each solved two physical database design problems. For each problem, subjects were presented with a list of available problem information (hardware, content, and activity data) and were directed to generate a physical design (record structures and access paths) that would minimize retrieval time and storage space. All sessions were audiotaped. Three types of data were incorporated for the analysis: information acquisition patterns, solution generation patterns, and verbal protocol. It was hypothesized that database design reasoning embodies forms of deliberation to reduce problem-solving complexity and that these forms resemble those found in other design problem-solving studies—commonality of task environmental demands will result in commonality in problem-solving methods in response to those demands. In particular, we expected to find specific control strategies, the use of hierarchical abstraction, the use of problem-specific heuristics, and the use of qualitative reasoning with mental models of dynamic components of the task. Our results indicate that these forms are indeed present and of significant value in physical database design problem solving. Experience played a significant role in determining both the form and substance of reasoning used in physical database design. Both experienced and inexperienced database designers exhibited at least some of these forms of reasoning. Experienced designers, however, effectively applied these forms, demonstrating a substance of reasoning, although their methods of application varied considerably. 
The least experienced designers did not effectively apply these forms and, lost in the detail of the design problems, were unable to generate reasonable designs. It is concluded that recognition of appropriate reasoning forms and the effective application of these forms are critical to developing efficient physical database designs. The implications of the findings are discussed.
Software effort estimation: An exploratory study of expert performance. Information Systems Research
S Vicinanza, T Mukhopadhyay, M Prietula
An exploratory study was conducted (a) to examine whether experienced software managers could generate accurate estimates of effort required for proposed software projects and (b) to document the strategies they bring to bear in their estimations. Five experienced software project managers served as expert subjects for the study. Each manager was first asked to sort a set of 37 commonly-used estimation parameters according to the importance of their effect on effort estimation. Once this task was completed, the manager was then presented with data from ten actual software projects, one at a time, and asked to estimate the effort (in worker-months) required to complete the projects. The project sizes ranged from 39,000 to 450,000 lines of code and varied from 23 to 1,107 worker-months to complete. All managers were tested individually. The results were compared to those of two popular analytical models: Function Points and COCOMO. Results show that the managers made more accurate estimates than the uncalibrated analytical models. Additionally, a process-tracing analysis revealed that the managers used two dissimilar types of strategies to solve the estimation problems—algorithmic and analogical. Four managers invoked algorithmic strategies, which relied on the selection of a base productivity rate as an anchor that was further adjusted to compensate for productivity factors impacting the project. The fifth manager invoked analogical strategies, which did not rely on a base productivity rate as an anchor, but centered around the analysis of the Function Point data to assist in retrieving information regarding a similar, previously-managed project. The manager using the latter, analogical reasoning approach produced the most accurate estimates.
The Experts in your Midst. Harvard Business Review
M Prietula, J Simon
This is an early paper I did with Herb Simon based on AI models of expertise I was examining. One was a project I did in Mark Fox's robotics lab, where we analyzed how expert schedulers performed their task and built an intelligent assistant based on their methods (ergo the "psychology of pscheduling" reference). Another was refining the concept of "intuition," based on a model of expertise developed with Allen Newell using the Soar AI computational architecture applied to scheduling expertise. This research was funded in part by Goizueta's Summer Research Fund.
Expertise and error in diagnostic reasoning. Cognitive Science
P Johnson, F Duran, F Hassebrock, P Moller, M Prietula, P Feltovich, D Swanson
An investigation is presented in which an AI simulation model (DIAGNOSER) is used to develop and test predictions for the behavior of subjects in a task of medical diagnosis. The first experiment employed a process-tracing methodology in order to compare the hypothesis generation and evaluation behavior of DIAGNOSER with individuals at different levels of expertise (students, trainees, experts). A second experiment, performed with only DIAGNOSER, identified conditions under which errors in reasoning in the first experiment could be related to the interpretation of specific data items. Predictions derived from DIAGNOSER's performance were tested in a third experiment with a new sample of subjects. Data from the three experiments indicated that (1) the form of diagnostic reasoning was similar for all subjects trained in medicine and for the simulation model, (2) the substance of diagnostic reasoning employed by the simulation model was comparable with that of the more expert subjects, and (3) errors in subjects' reasoning were attributable to deficiencies in disease knowledge and the interpretation of specific patient data cues predicted by the simulation model.
[Selected Book Chapter] Studies of Expertise from Psychological Perspectives: Historical Foundations and Recurrent Themes. In K Ericsson, R Hoffman, A Kozbelt, M Williams (Eds.), Cambridge Handbook of Expertise and Expert Performance, 2nd Edition. New York, NY: Cambridge University Press.
P Feltovich, M Prietula, A Ericsson
This is an updated chapter from the 1st edition that reviews an influential historical research period, roughly from the mid-1950s to the 1980s, when empirical laboratory studies of expert reasoning were first combined with theoretical models of human thought processes that could reproduce observable performance. It characterizes some of the enduring insights about mechanisms and aspects of expertise that generalize across domains, reflecting on the original theoretical accounts. Three primary roots play an essential role in the field of expertise: artificial intelligence (AI), cognitive psychology, and education. The first AI program, the Logic Theorist, was written in these early years. Cognitive Psychology and Computer Science merged into a close collaboration named Cognitive Science. Expert cognition was conceived as the "goal state" for education, the criterion for what the successful educational process should produce, and a measure to assess its progress. This informed both pedagogical design and teacher evaluation. Knowledge is viewed as the primary source of difference associated with expertise, and this influenced the emergence of the "expert systems" approach to artificial intelligence. This work was supported by a Goizueta summer research grant.
[Selected Book Chapter] The benefits and liabilities of interacting for innovation: A quantitative model. In K Pugh (Ed.), Smarter Innovation: Using Interactive Processes to Drive Better Business Results. London, UK: Ark Group.
S Levine, T Gorman, M Prietula
We show that corporate performance is affected by peer-to-peer sharing: instances when people supplement their knowledge by interacting with others. Popular wisdom holds that such interaction benefits performance unequivocally, but we find otherwise. Combining qualitative fieldwork – interviews, observation, and document analysis – with computational modeling, we show that sharing can benefit performance, matter little, or even harm it. The effect of sharing on performance depends on at least three variables (and likely more): the learning capacity of individuals in the organization, the state of organizational memory, and turbulence in the competitive environment. The findings suggest that the effects of interaction on innovation are neither pure nor simple.
[Selected Book Chapter] Computational simulations. In D. Teece, M. Augier (Eds.), Palgrave dictionary of strategic management. London, UK: Palgrave MacMillan.
M Prietula, A Kathuria
[Selected Book Chapter] Thoughts on Complexity and Computational Models. In P Allen, S Maguire, B McKelvey (Eds.), The Sage Handbook of Complexity and Management. London, UK: Sage.
What do we mean by ‘complexity’ when we discuss computational models of human organizations? An exact and particular answer to this question may not be straightforward. We see definitions of complexity ranging from informal articulations of ‘generic difficulty’ to the highly-constrained mathematical specification of specific and requisite properties. In her book, Melanie Mitchell concludes that ‘neither a single science of complexity nor a single theory of complexity exist yet’ and that ‘many different measures of complexity have been proposed; however, none have been universally accepted by scientists’. Is this lack of convergence either essential or important for organizational researchers? Similar differences can be found for ‘complexity’ among (and within) the aforementioned example disciplines. Therefore, to begin a discussion, it is necessary to provide a sufficient definition or description, whether operational or otherwise, that accommodates a particular disciplinary context, so that interpretive differences in use can be accurately discerned. And that is how the chapter begins…
[Selected Book Chapter] Where and When Can Open Source Thrive? Towards a Theory of Robust Performance. In P Agerfalk, C Boldyreff, J Gonzalez-Barahona, G Madey, J Noll (Eds.), Open Source Software: New Horizons. OSS 2010. IFIP Advances in Information and Communication Technology, vol 319. Springer, Berlin, Heidelberg.
While the economic impact of, and the interest in, open source innovation and production has increased dramatically in recent years, there is still no widely accepted theory explaining its performance. We combine original fieldwork with agent-based simulation to propose that the performance of open source is surprisingly robust, even in seemingly harsh environments with free riding, rival goods, and high demand. Open source can perform well even when cooperators constitute a minority, although their presence reduces variance. Under empirically realistic assumptions about the level of cooperative behavior, open source can survive even increased rivalry, and performance can thrive if demand is managed. The plausibility of the propositions is demonstrated through qualitative data and simulation results.
[Selected Book Chapter] Gossip matters: Destabilization of an organization by injecting suspicion. In A. Kott (Ed.), Information Warfare and Organizational Decision-making. Boston, MA: Artech House.
M Prietula, K Carley
We examine how Internet-based groups can be disrupted through loss of trust via deception. The accuracy of the information flowing in a group is critical to its ability to function. Even modest increases in the error rate generated in one node can induce a profound, far-ranging performance degradation. However, another characteristic of information flow can be at least as critical -- trustworthiness. One universal form of calibrating trust in information exchange is gossip. We explore the theory and evidence of gossip, and how it impacts veiled groups that form to exchange information anonymously. Through agent-based models, we demonstrate that gossip is dysfunctional not because agents are “wasting time” gossiping but because its existence can reduce the flow of any information. However, gossip can be functional to “isolate, prune, and tune” by spreading information on the viability of less reliable components. Thus, gossip, as an original cultural learning mechanism, can be exploited as a form of organizational learning. This research was funded in part through an NSF Computer and Information Sciences, Information and Intelligent Systems Award.
[Selected Book Chapter] Advice, Trust, and Gossip Among Artificial Agents. In A. Lomi, E Larsen (Eds.), Dynamics of Organizations: Computational Modeling and Organization Theories. AAAI Press, 2001
In the Foreword to this volume, Jim March presents an insightful excursion into the historical role that computer simulation has played (or, in most cases, has not played) in the halls of organizational theory and identifies two general theoretical problems that can be tackled by simulations: ecological (contextual) complexity and historical (temporal) complexity. The former addresses the difficulties of making macro predictions (or explanations) involving certain types of interactions of micro-events (e.g., agent decisions and behaviors). The latter addresses the difficulties of making state predictions (or explanations) that are strongly derivative of prior events (especially if a component of those prior events involved exogenous random components). As the reader will discover, the topic of this chapter concerns an organizational system that exhibits both ecological and historical complexities. As the reader will judge, computational modeling provides a mechanism to embrace those complexities and generate insight into the organizational system of interest. In this chapter, it is argued that emergent advice coalitions, such as those on the Internet, form a new model of business that incorporates a fundamental human activity to control for sources of bad advice – gossip. In addition, it is argued that gossip and trust interact, but in a manner that insulates trust while altering advice-taking behaviors. A series of computer simulations is created to explore the implications of these proposals. The underlying substrate for the arguments concerning gossip, social agent coalitions, and the methodology is given by weaving together several theoretical positions.
[Selected Book Chapter] Boundedly rational and emotional agents: Cooperation, trust, and rumor. C. Castelfranchi, Y-H Tan (Eds.), Trust and deception in virtual societies. Springer
M Prietula, K Carley
Computer-based AI agents, in various forms, are becoming actively involved in our personal and professional decisions and deliberations. We interact with them; they interact with each other. In this chapter, we describe a broad Model Social Agent study exploring how a process model of boundedly rational agents with emotion behaves across increasingly social contexts, and the impact of cooperation, trust, rumor, and deception within those contexts. Our work focuses on understanding the relationships among humans, agents, tasks, and the social situations in which they are engaged. From this, we establish the elemental basis of social behavior and group phenomena and make predictions about them. Our guide for this effort is the Model Social Agent matrix.
[Selected Book Chapter] WebBots, Trust, and Organizational Science. M Prietula, K Carley, L Gasser (Eds.), Simulating organizations: Computational models of institutions and groups. Menlo Park, CA: AAAI Press.
K Carley, M Prietula
Web Bots are artificial creatures, by which we mean they are created by us. Yet they are neither biological nor mechanical; they are usually a form of [distributed] computational AI agents. Web Bots take many forms, but what makes them unique is the environments in which they reside – webs of interconnected networks. Initially, they were simplistic and engaged in well-defined, restricted tasks. However, their structure, function, and responsibilities are escalating; they now operate as distributed intelligent agents working cooperatively to achieve goals. In this chapter, we present an architecture that can realize a specific type of Web Bot that can reason and communicate with other Web Bots. A central point of this chapter is exploring the social aspects of these AI Web Bots. We describe a computational experiment in which we assign tasks to AI Web Bot agents, adjusting trust and forgiveness in their information exchanges. We conclude by arguing that it is essential to assimilate this technology into the corporate environment and that a foundation for defining an AI Web Bot (organizational) science is now emerging.
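As an illustration of the kind of trust-and-forgiveness adjustment the experiment manipulates, here is a minimal sketch (the update rule, parameter names, and values are assumptions for illustration, not the chapter's actual mechanism): trust rises slowly with cooperation, drops sharply on a betrayal, and a forgiveness term pulls it back toward a neutral baseline.

```python
def update_trust(trust, cooperated, forgiveness=0.1, penalty=0.4, reward=0.1):
    """Hypothetical trust update for one information exchange between bots.

    Cooperation raises trust with diminishing returns near 1.0; betrayal
    causes a sharp multiplicative drop; forgiveness slowly decays the
    memory of the outcome, pulling trust back toward a 0.5 baseline.
    """
    if cooperated:
        trust = trust + reward * (1.0 - trust)   # slow gain, saturating at 1.0
    else:
        trust = trust * (1.0 - penalty)          # sharp multiplicative drop
    return trust + forgiveness * (0.5 - trust)   # drift toward neutral baseline

trust = 0.5
history = [True, True, False, True, True]        # one betrayal mid-stream
for cooperated in history:
    trust = update_trust(trust, cooperated)
print(round(trust, 3))
```

Raising `forgiveness` makes a single betrayal fade quickly from the exchange; setting it to zero makes the bot permanently distrustful after repeated betrayals.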
[Selected Book Chapter] The Turing Effect: The nature of trust in expert systems advice. P. Feltovich, K. Ford, R. Hoffman (Eds.), Expertise in context: Human and machine. Menlo Park, CA: AAAI Press.
J Lerch, M Prietula, C Kulik
This is one of the earliest papers examining trust and explanations in AI. Alan Turing's classic 1950 test explored whether a judge could discern if typed exchanges were with a computer or a human. Here we take a slightly different approach. We define and describe the Turing Effect as the differential impact on trust judgments resulting from attributing advice to an AI system. In four experiments, we told participants the source of advice (AI, human expert, human novice) for a series of financial problems presented in simulated email. Participants reacted differently to AI sources of advice. Ratings of agreement (with the advice), confidence (in the source), and performance attributions (Stability: stable, unstable × Locus of Control: internal, external) were collected to assess the associated underlying causality. Participants were less confident in AI than in human experts, and rated effort as contributing less for AI than for human sources (i.e., AI cannot exert “effort”), yet agreed more with the AI's advice than with the experts' advice. Thus, specific attributions about the source can drive differences in trust. Explanations significantly affected agreement with the advice but did not affect confidence in the source. These results contradict designers' claims that AI explanations increase trust (i.e., dependability).
[Selected Book Chapter] ACTS theory: Extending the model of bounded rationality. K Carley, M Prietula (Eds.), Computational Organization Theory. Hillsdale, NJ: Erlbaum.
K Carley, M Prietula
Bounded rationality asserts that agents may be rational in intent, but less than rational in execution because of fundamental limits of cognition. The model was intended to replace the perfectly rational agent upon which theories of economics and organizations were based. In this chapter, we extend the original model of bounded rationality and incorporate it into a general process theory of organizations -- ACTS theory. In this theory, we view organizations as collections of artificially intelligent agents who are cognitively restricted, task oriented, and socially situated. We present the arguments for the approach and articulate its assumptions axiomatically.
[Selected Book Chapter] A computational model of musical creativity. P Rosenbloom, J Laird, A Newell (Eds.), The Soar Papers: Research on Integrated Intelligence. Vol. 2. Cambridge, MA: MIT Press
S Vicinanza, M Prietula
Here we present an AI model of musical creativity that generates unique melodies. Creativity has long been viewed as a form of problem solving; similar assertions have been made for related, if not equivalent, phenomena such as insight, intuition, and scientific discovery. The primary tenet of these theories is that all cognitive behavior can be described by general mechanisms of problem representation and learning. Musical creativity, as a form of human cognition, can then be described in these terms. In this chapter, we investigate how a crucial component of the musical creative process – melody generation – may be simulated through software that embodies a view of musical creativity as heuristic search through multiple problem spaces. This is notable because prior cognitive models of music describe melody generation as an “unconscious process” or a “creative impulse.” As there is no clear consensus on how music is cognitively represented, we articulate our definition in terms of the basic general problem-solving machinery of the Soar architecture. We demonstrate how a plausible model of melody within this architecture (having direct knowledge of scales, chords, rhythmic patterns, and a few heuristics for selecting the pitch of a note) can generate many different melodies from the same knowledge base. Furthermore, using a simple set of preferences selected from Bach, Melody-Soar generates surprisingly complex tunes that others interpret as Bach-ish.
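The idea of melody generation as weighted heuristic search can be sketched in a few lines (the scale, heuristics, and weights below are invented for illustration and bear no relation to Melody-Soar's actual problem spaces or Soar's preference mechanism):

```python
import random

random.seed(7)

C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def next_pitch(prev, scale=C_MAJOR):
    """Pick the next pitch using two toy heuristics:
    prefer stepwise motion over leaps, and pull toward the tonic."""
    i = scale.index(prev)
    candidates = {
        scale[(i - 1) % len(scale)]: 3,   # step down: strongly preferred
        scale[(i + 1) % len(scale)]: 3,   # step up: strongly preferred
        scale[(i + 4) % len(scale)]: 1,   # leap: occasionally allowed
        "C": 2,                           # pull toward the tonic
    }
    pitches = list(candidates)
    weights = [candidates[p] for p in pitches]
    return random.choices(pitches, weights=weights)[0]

def melody(length=8, start="C"):
    notes = [start]
    while len(notes) < length:
        notes.append(next_pitch(notes[-1]))
    return notes

print(" ".join(melody()))
```

Even this toy version shows the key property of the approach: the same fixed knowledge base (scale plus heuristic weights) yields a different melody on every run, because the heuristics constrain rather than determine the search.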
In the News (6)
Acting workshop for business leaders to be offered at Goizueta
The module is designed and offered by the developers of the original CMU seven-week course: Professor Geoffrey Hitch of CMU and Goizueta Business School’s Michael Prietula.
Hurricane Matthew interesting case for preparedness, leadership
Michael Prietula, a Professor of Information Systems and Operations Management at Goizueta Business School, is working in collaboration with Emory’s Office of Critical Event Preparedness and Response, the University of Notre Dame and the Miami-Dade Office of Emergency Management. The study has produced a “virtual operations center” modeling tool to investigate how public and private agencies exchange information and make decisions during times of crisis.
Professor receives funding to study disaster response
Thanks to a grant from the National Science Foundation, a Goizueta Business School professor is conducting research to understand how communities plan for, and respond to, emergency events. Michael Prietula, a Professor of Information Systems and Operations Management at Goizueta, is working in collaboration with Emory’s Office of Critical Event Preparedness and Response, two professors from the University of Notre Dame and the Miami-Dade Office of Emergency Management.
A novel look at how stories may change the brain
Emory News Center
His co-authors included Kristina Blaine and Brandon Pye from the Center for Neuropolicy, and Michael Prietula, professor of information systems and operations management at Emory’s Goizueta Business School...
The business, economics and psychology of organized violence and terrorism
Emory News Center
Goizueta Professor of Information Systems and Operations Management Michael Prietula, who studies human decision making and computational modeling of social systems; Distinguished Professor of Neuroeconomics Gregory Berns, whose research focuses on using brain imaging to understand motivation...
Biz students learn about terror fallout
Future executives learn how terrorism causes economic harm at an Atlanta university. CNN's Nick Valencia reports.