
SUBSTITUTING HUMAN DECISION-MAKING WITH MACHINE LEARNING: IMPLICATIONS FOR ORGANIZATIONAL LEARNING


NATARAJAN BALASUBRAMANIAN Syracuse University

YANG YE Southwestern University of Finance and Economics

MINGTAO XU Tsinghua University

The richness of organizational learning relies on the ability of humans to develop diverse patterns of action by actively engaging with their environments and applying substantive rationality. The substitution of human decision-making with machine learning has the potential to alter this richness of organizational learning. Though machine learning is significantly faster and seemingly unconstrained by human cognitive limitations and inflexibility, it is not true sentient learning and relies on formal statistical analysis for decision-making. We propose that the distinct differences between human learning and machine learning risk decreasing the within-organizational diversity in organizational routines and the extent of causal, contextual, and general knowledge associated with routines. We theorize that these changes may affect organizational learning by exacerbating the myopia of learning, and highlight some important contingencies that may mute or amplify the risk of such myopia.

Nearly a century ago, John Dewey (1922) highlighted how the environment shapes learning by permitting and constraining the selection and expression of individual habits. As they learn, individuals actively engage with the environment and rely on their substantive rationality, or the capacity for rational action under some value criteria, to develop diverse understandings of the environment (Dewey, 1922; Lindebaum, Vesa, & den Hond, 2020). Within organizations, such diverse understandings among members enable organizations to learn from their experience in rich and diverse ways and better navigate environmental changes (Feldman & Pentland, 2003; Pentland & Feldman, 2005; Zollo & Winter, 2002).

The rise of machine learning (ML) algorithms, which apply statistical analysis to large volumes of data, can fundamentally alter this richness in organizational learning (OL), the process of individual and collective learning from experience with the intent to improve organizational actions. This is because machines “learn” in profoundly different ways compared with humans (Broussard, 2018; Smith, 2019). While human learning, albeit fallible, is varied, rich in social context, forward-looking, and based on judgment and understanding (Lindebaum et al., 2020; Smith, 2019), ML lacks sentience and relies on formal rationality, or impersonal quantitative calculations, to select a small set of statistical models that best describe the specific context of historical data (Broussard, 2018; Choudhury, Starr, & Agarwal, 2020; Lindebaum et al., 2020).

These differences between human learning and ML are particularly consequential to OL when organizations substitute human decision-making with ML. Unlike when ML complements human decision-makers and is circumscribed by human judgment (like other tools), as a substitute, ML imposes greater formal rationality on decision-making (Lindebaum et al., 2020). More broadly, such differences combined with ML’s greater autonomy (unlike other software that uses decision rules specified by humans, ML independently infers them based on correlations) have led some scholars to deem the rise of ML to be of “epochal significance” (Smith, 2019: xiii). Others, supported by growing evidence, have raised concerns that these attributes may have detrimental implications, including lack of transparency (Lindebaum et al., 2020), omission of important contextual knowledge (Choudhury et al., 2020), and bias (Whittaker et al., 2018). In light of these conversations that are relevant to OL, and in the absence of systematic studies of the impact of ML on OL, we build on classic (Dewey, 1922; Galbraith, 1973; Mintzberg, 1994; Simon, 1947) and recent (Choudhury et al., 2020; Lindebaum et al., 2020) work, and compare and contrast human learning with ML to theorize how OL may be affected when human decision-making is substituted with ML.


We conceptualize OL as a combination of individual-level learning driven by individual motivations, perceptions, and experiences, developed through reflecting on and responding to environmental changes and expressed as habits and heuristics (Dewey, 1922; Mahoney, 1995; Simon, 1947); and organizational-level learning, developed through co-constitutive interactions among individuals, artifacts, and the environment, and encoded in organizational routines, rituals, customs, politics, and software such as ML (Glaser, 2017; March, 1991; Simon, 1947). We first theorize that substituting human decision-making with ML risks decreasing (a) the “within-organizational diversity in routines,” or the diversity in variants of the same routine within the organization; and (b) the richness of background knowledge, or the extent of contextual and causal knowledge, as well as general knowledge beyond specific contexts, in organizational routines. We then argue that these changes can exacerbate learning myopia, or the tendency to overlook distant times, distant places, and failures (Levinthal & March, 1993). We theorize, and complement with a proof-of-concept simulation, that this happens partly because ML replaces the diversity in routines arising from human learning with a small, homogenous set of variants selected based on conformance with historical data (“selection effect”). This, combined with the lower richness of background knowledge (“nescience effect”) and the consequent reduction in the ability to engage in substantive rationality, makes organizations susceptible to ignoring environmental changes, within-organizational interdependencies across routines, and extreme outcomes.

While the first part of our theorization is motivated by the broader concerns highlighted in recent studies, it is not our intent to argue against the use of ML. ML has several benefits, including enormous computational superiority over humans and an ability to draw nonobvious patterns from voluminous data (Choudhury et al., 2020; Smith, 2019). These benefits of ML can enable organizations to better and more quickly assess, learn, and respond to some environmental changes. In this regard, the second part of our theorization lays out important contingencies that affect the trade-off between the benefits of ML and the benefits of greater routine diversity and knowledge richness, and amplify or mute the aforementioned risks of myopia.

Our broadest contribution is to offer a theoretical framework that helps explain how substituting human decision-making with ML may alter organizational routines and possibly impoverish OL, which in turn has consequences for the quality of organizational decision-making and performance. By doing so, we hope to enable a deeper understanding of the OL-related risks of ML. This is particularly relevant as ML is a broadly applicable tool that is relevant to many kinds of organizations, including large and small businesses, hospitals, police, and schools (Whittaker et al., 2018), not all of which may have the requisite resources and capabilities to fully understand the risks of substituting human decision-making with ML.

We also contribute to the literature on inadequacies of OL (Ahuja & Lampert, 2001; Levinthal & March, 1993; Levitt & March, 1988) by underscoring that ML can intensify and create new sources of learning myopia. Our findings suggest, counterintuitively, that though ML may not face the human cognitive limitations that lead to learning myopia, substituting human decision-making with ML can cause and exacerbate such myopia because the key to addressing that myopia lies not in ML but within the nature of human learning itself. In this regard, our proof-of-concept simulation clearly shows that ML can cause myopia even if human learning was not myopic.

Together, our theorization highlights an important implication—the increased importance of the human element in mitigating OL-related risks of ML, particularly with regard to instituting governance mechanisms to retain the routine diversity and knowledge richness needed to ensure successful OL. Furthermore, our study suggests several novel extensions to research on OL and research on ML in organizations, including investigating how ML may affect knowledge depreciation and spillovers of interorganizational learning, and how organizational goals may moderate the effects of ML. Thus, our study goes beyond an ontological critique of how ML may favor a positivist worldview, and improves our understanding of the practical implications of how ML transforms OL.


Our exposition proceeds by highlighting the critical differences between human learning and ML, and then elaborating on two risks associated with substituting human decision-making with ML—lower routine diversity and knowledge richness—and how these risk causing learning myopia. We then juxtapose these arguments with the potential benefits of ML to identify contingencies that may moderate our hypothesized effects. We follow this with a proof-of-concept simulation, and conclude by outlining some important implications. Our overall model is presented in Figure 1.

HUMAN LEARNING VERSUS MACHINE LEARNING IN ORGANIZATIONAL LEARNING


Individuals learn as they interact with other individuals and organizational artifacts inside the organization, and with their environment, such as other organizations (Baum, Li, & Usher, 2000; Ingram & Baum, 1997; Levitt & March, 1988). During this process, individuals process information (Dewey, 1922; Galbraith, 1973; Simon, 1947) according to formal procedures and techniques. In addition, they use substantive rationality to reflect on the information based on their individual value criteria, and develop their own understanding of their interactions with the environment (Dewey, 1922; Kalberg, 1980; Lindebaum et al., 2020). Together, the information processing combined with substantive rationality enables individuals to learn to improve their actions.

This individual learning then transforms into OL through interactions and communications among the organization’s members and communities (Argote & Miron-Spektor, 2011; Brown & Duguid, 1991; Fiol & Lyles, 1985; Levitt & March, 1988; Simon, 1947). During these interactions, which also involve applications of information processing and substantive rationality, knowledge accumulates in routines that can be retrieved in the future or by other members of the organization (Argote & Miron-Spektor, 2011; Galbraith, 1973; Yi, Knudsen, & Becker, 2016). Knowledge in these routines is stored partly as habits and tacit knowledge in individuals and groups of individuals, and partly in organizational artifacts such as manuals, databases, and software (such as ML) and organizational behavior such as rituals, customs, and politics (Nelson & Winter, 1982; Pentland & Feldman, 2005; Simon, 1947). Thus, the distinct features of individual human learning and artifacts (in our context, ML) influence OL.

FIGURE 1 Overview of Model

[Figure 1 summarizes the model: differences between human learning and ML create the risk of lower diversity in routines (P1a) and the risk of lower background knowledge in routines (P1b). These mechanisms underlie the risk of learning myopia: ignoring the long run (Proposition 2a), ignoring organizational interdependencies (Proposition 2b), and inability to predict extreme outcomes (Propositions 2c, 2d). Contingencies: rate and magnitude of environmental change (Proposition 3a), complexity of routines (Proposition 3b), universality of cause-effect relationships (Proposition 3c), and learning dependence among routines (Proposition 3d).]


Learning in Humans


As they learn, individuals (e.g., employees at a retail store) not only process tangible, codifiable stimuli or inputs (e.g., a customer’s arrival in a store) but also engage similarly with other intangible, noncodifiable stimuli or inputs (e.g., the individual’s perceptions about the customer’s purchase decision). Importantly, how individuals process such information varies across individuals and contexts (Dewey, 1922; Galbraith, 1973; Hildebrand, 2008). Individuals differ in how they process information because they have different value postulates driven by their individual experiences (e.g., is this a customer they know?), creative impulses, motivations (e.g., are they facing a deadline to achieve a sales target?), and perceptions (Dewey, 1922; Lindebaum et al., 2020; Mintzberg, 1994; Simon, 1947). How an individual processes information also changes with the environment as new contexts provide new information, which spurs new interactions between the environment and the individual (Dewey, 1922). Indeed, many individual habits and decision models arise through interactions with the social environment (e.g., from family, other members of the organization, and media) and thus are not fully separable from the environment that influenced their formation (Dewey, 1922; Kerr, 2007; Simon, 1947). In sum, human learning is rich in social context.

In addition, individuals not only develop knowledge about their current contexts but also accumulate knowledge beyond those contexts (e.g., about potential customers who have never bought anything from the store). Furthermore, individual learning has a forward-looking element. When individuals make decisions, they draw on their conceptualization of the future as inputs into the decision-making process (Lindebaum et al., 2020; Mintzberg, 1987, 1994). In so doing, they also rely on, and actively develop, their knowledge of causality to understand the impact of past actions on future consequences. Finally, humans can also learn from unexpected situations through reflection and improve their decision-making (Lindebaum et al., 2020; Mintzberg, 1994).

Although, as described above, human learning can be rich and diverse, it also has some limitations. Humans have limited information-processing speed and capacity, which causes human learning to be not only slow but also bounded in scope (Cyert & March, 1963; Galbraith, 1973; March & Simon, 1958; Simon, 1947). Furthermore, because individuals do not always use formal optimization (Kalberg, 1980; Lindebaum et al., 2020), human learning can also be affected by inflexibility to change and other biases that cause different forms of learning myopia (Levinthal & March, 1993).

“Learning” in Machine Learning


Machines “learn” and improve over time by computing “more correct” solutions as additional data become available (Lindebaum et al., 2020). Here, “correctness” is defined based not on judgment and understanding but on predictive accuracy (or other quantitative measures) in the context of historical data. Thus, ML is not true, sentient learning (Broussard, 2018: 89). Rather, ML relies on formal rationality embedded in statistical models and codified input data to find a small set of decision models that best predict an outcome variable, and applies those models to new input data.

When implementing ML, organizations often use prepackaged algorithms (Broussard, 2018), which are then put through a time-consuming and costly training process. During this process, alternative statistical models are generated using historical data (e.g., information on past promotional offers to customers) based on decisions previously made by humans or coded by humans for ML training. These models are then tested by humans, primarily with regard to their predictive accuracy in the context of historical data. Models that pass testing are deployed for making decisions (e.g., which customers to make a promotional offer to) with little human intervention. Because of its superior computational capabilities, ML can apply these models to much larger volumes of data (e.g., data on millions of customers) than humans can. These models may be automatically updated, albeit within the limits of their parameters, as new data become available (e.g., did the customers make a purchase?). More extensive updating, including identifying and correcting errors, requires humans to retrain the ML models using a process similar to that described above. Thus, the underlying human knowledge is transferred to ML through the decisions and input variables codified in the historical data and through the judgments of humans involved in the training process (Ash, 2016: 365) when evaluating the predictive accuracy of ML relative to the historical data.
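
To make this cycle concrete, the sketch below walks through a minimal train-test-deploy loop in Python with scikit-learn. The promotional-offer setting, the synthetic data, and the threshold rule standing in for past human decisions are illustrative assumptions, not a description of any specific organization's system.

```python
# A minimal sketch of the training-testing-deployment cycle described
# above, on synthetic "historical decision" data. The setting and all
# variable names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Historical data: a codified input (months since last purchase) and the
# decisions previously made by humans (offer made or not).
months_since_purchase = rng.uniform(0, 12, size=2000).reshape(-1, 1)
past_human_decision = (months_since_purchase.ravel() < 4.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    months_since_purchase, past_human_decision, random_state=0)

# Generate alternative statistical models and test them, primarily on
# predictive accuracy in the context of the historical data.
candidates = [LogisticRegression(), DecisionTreeClassifier(max_depth=3)]
scores = [accuracy_score(y_test, m.fit(X_train, y_train).predict(X_test))
          for m in candidates]

# The model that passes testing is deployed and applied to new inputs with
# little human intervention; retraining repeats this loop on fresh data.
deployed = candidates[int(np.argmax(scores))]
print(deployed.predict(np.array([[2.0], [7.0]])))  # offer decisions
```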

Though humans are involved in the ML training and testing process, as they are during the design of other software, ML differs in some important aspects: first, other software works with categorizations of our world as prescribed by humans (Smith, 2019). ML, in contrast, is “not committed to any particular ontological story” (Smith, 2019: 58) and does not need a discrete, object-based, formal ontology. Rather, it autonomously infers the underlying decision rules from historical data based on statistical analysis. Second, these inferred rules remain opaque in that even ML programmers cannot directly observe or manipulate the rules but can only see the final decisions. This is especially true for sophisticated ML algorithms that are the focus of our analysis (e.g., artificial neural networks store models as weights that do not have any correspondence to real-life objects). In contrast, other complex software, such as Enterprise Resource Planning and traditional expert systems (which, like ML, can substitute for human decision-making) use decision rules programmed by humans (Negnevitsky, 2011; Weiss & Kulikowski, 1991) and are a codification of their designers’ knowledge. Hence, the opacity in ML is intensified compared with other complex software, making it harder to locate and resolve any errors. Finally, when substituted for human decision-making, ML has a considerable degree of autonomy that other software does not have (Murray, Rhymer, & Sirmon, 2021), which increases the impact of errors made by ML. Errors are noticed only after they have occurred and have possibly affected many decisions, since humans do not filter every decision made by ML (Whittaker et al., 2018).

In sum, while ML can augment human effort, it has a unique combination of characteristics that raise concerns for OL, particularly when it substitutes for human decision-making. We discuss this below, beginning with how these differences between human learning and ML can manifest in two key aspects of organizational routines, especially when no concomitant mitigating mechanisms are adopted. (We return to these mechanisms in the Discussion.)

Within-Organizational Diversity in Routines


An important consequence of the aforementioned nonuniformity in human learning is the generation of variants of the same organizational routine (Feldman & Pentland, 2003; Pentland & Feldman, 2005). This within-organizational-routine diversity (“routine diversity”) can be seen, for example, when different managers within the same function have different variants of the same routine for marketing to customers or for hiring and firing employees. Similarly, different franchisees of a firm may have different variants for making decisions related to customer satisfaction problems. Such diversity, as we elaborate later, plays an important role in addressing myopia in learning.

Theoretically, such diversity traces back to the essence of human learning regarding its reliance on substantive rationality and the interactions with the environment and changes therein. Humans have not only creative impulses but also a vast range of value postulates that vary along many dimensions (Dewey, 1922; Lindebaum et al., 2020). In addition, as Dewey (1922: 96) observed, “every reaction takes place in a different environment, and its meaning is never twice alike, since the different environment makes a difference in consequences.” These environmental variations, along with differences in how individuals interact with those variations, lead to a diversity of variants within a routine, including, for instance, the use of different models and inputs by different individuals in their decision-making. This can be generalized to interactions among groups of individuals with their diverse experiences (Becker, 2004; Glaser, 2017; Pentland & Hærem, 2015), so that these broader interactions add to the diversity within a routine.

As related to ML, routine diversity gives rise to a large part of the statistical variation in the historical data. As a stylized example, in a routine to offer a repurchase discount to customers, one part of the organization may choose customers who purchased in the past three months, while another part may pick those who purchased in the past six months, thus creating two different routine variants. (Real-life routines, of course, will vary on more dimensions.) These differences would be reflected in the historical data—for instance, as variations in the length of time since the last purchase.

In contrast to such natural diversity that arises from the features of human learning discussed earlier, ML uses a small homogeneous set of decision models chosen for conformance to narrow statistical criteria (unlike humans, who have more diverse value criteria). For instance, in the above example, ML may choose an average of three or six months to decide whether a customer should receive an offer. By doing so, ML effectively supplants the diverse routine variants arising from human learning with a small set of homogeneous variants. In this regard, at least in theory, it is possible for organizations to “simulate” diversity in ML (e.g., by choosing additional models). However, such models will also be chosen for their fit to historical data, and hence are unlike diversity arising from human learning. Furthermore, although machines can “learn” based on new data from a changing environment (e.g., a movie recommendation algorithm can update small changes in customer preferences), they can do so only within the narrow confines of their data and formal rationality. Thus, even in the face of a generative environment, ML may not be able to augment routine diversity.


Such a decline in routine diversity due to ML can be intensified by its greater computational capabilities. In particular, as not all routine variants are likely to be equally aligned with the environment, an important component of the OL process is to identify which variants work and which do not. Indeed, such a process of selection has been highlighted in prior studies of OL (e.g., Anand, Mulotte, & Ren, 2016; Cohen & Bacdayan, 1994; Galbraith, 1973; March, 1991). With humans, this process of selection is slow and involves trial and error (e.g., Galbraith, 1973; Levitt & March, 1988). Indeed, not all learning efforts by humans succeed. Organizations may not be able to identify the best variants, and even if they are able to, human inflexibility to change may prevent their adoption, all of which slow the reduction in routine diversity. In contrast, ML’s ability to rapidly process large volumes of data (Cui, Wong, & Lui, 2006) implies that variants best aligned with the environment (as reflected in the historical data) are selected rapidly. Hence, in our example, ML may quickly pick three months as the criterion to make a discount offer if it finds that, historically, customers who received an offer after three months purchased more often than did those who received an offer after six months. Humans, on the other hand, may not recognize this difference between the two variants, or may have considered other factors when deciding to retain both variants. We summarize these arguments below and return to this aspect in our proof-of-concept simulation.

Proposition 1a. When substituted for human decision-making, machine learning risks lowering routine diversity (“selection effect”).
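
As a concrete illustration of this selection effect, the toy simulation below contrasts an organization that retains both variants of the discount routine with a learner that selects the single variant best fitting the historical data. The two variants and their conversion rates are illustrative assumptions; this is a simplified sketch, not the proof-of-concept simulation reported later.

```python
# A toy sketch of the "selection effect": two human routine variants
# coexist, but selection on historical fit collapses them to one.
# Conversion rates are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N = 10_000

# Two parts of the organization use different variants of the routine.
variant = rng.choice(["3-month", "6-month"], size=N)

# Historically, offers under the 3-month variant converted slightly better.
p_buy = np.where(variant == "3-month", 0.30, 0.22)
purchased = rng.random(N) < p_buy

# Human learning: both variants persist, preserving routine diversity.
for v in ("3-month", "6-month"):
    print(v, "historical conversion:", round(purchased[variant == v].mean(), 3))

# ML-style selection: the single variant with the best historical fit is
# applied everywhere, so the other variant vanishes from future data.
selected = max(("3-month", "6-month"),
               key=lambda v: purchased[variant == v].mean())
print("variant selected for all future decisions:", selected)
```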

Richness of Background Knowledge


The way machines learn can also influence the richness of background knowledge contained in organizational routines and their variants. Substituting human decision-making with ML can alter this richness of knowledge and affect the organization’s ability to engage in substantive rationality. Despite the human involvement during training, the reliance on statistical models means that ML is lacking in social context (Broussard, 2018). ML uses codifiable data, some of which humans use in their decision-making (e.g., age, education, and experience of a job applicant), and some of which humans cannot use because they require extensive information-processing capabilities (e.g., evaluating millions of social media posts for a marketing campaign). Nonetheless, these data are limited to measurable factors and exclude any uncodifiable but relevant aspects of the social context and decision-makers, such as their creativity, judgment, and experiences. In support of this argument, Choudhury et al. (2020) found that when a patent application text contains irrelevant information or omits some information, ML misses critical prior art (that is, evidence that the invention is already known). In contrast, individual experts with background knowledge are able to provide keywords that help locate such prior art even if the applicants do not report the keywords (Choudhury et al., 2020). Further, ML’s reliance on statistical analysis also aggravates the context specificity of its decision models due to its tendency toward overfitting, when ML encodes idiosyncratic aspects of the data that explain those data well but are irrelevant or incorrect in other contexts (Choudhury et al., 2020; Whittaker et al., 2018).
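
The overfitting tendency can be illustrated with a short sketch: an overly flexible model fit to a small synthetic sample describes the historical context almost perfectly but degrades sharply when the context shifts. The functional form and noise level are arbitrary assumptions.

```python
# A minimal sketch of overfitting: a flexible model encodes idiosyncrasies
# of the historical sample and fails in a shifted context. Synthetic data.
import numpy as np

rng = np.random.default_rng(2)

def true_effect(x):
    # The underlying cause-effect relationship (assumed, for illustration).
    return 0.5 * x

x_hist = rng.uniform(0, 5, 15)
y_hist = true_effect(x_hist) + rng.normal(0, 0.3, 15)

# A high-degree polynomial "explains" the historical context very well...
overfit = np.polynomial.Polynomial.fit(x_hist, y_hist, deg=12)
print("in-sample error:",
      round(float(np.abs(overfit(x_hist) - y_hist).mean()), 3))

# ...but its idiosyncratic wiggles are irrelevant or wrong elsewhere.
x_new = rng.uniform(5, 7, 15)
y_new = true_effect(x_new) + rng.normal(0, 0.3, 15)
print("out-of-context error:",
      round(float(np.abs(overfit(x_new) - y_new).mean()), 3))
```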

Moreover, unlike humans, ML lacks the knowledge of causal relationships among variables, relying solely on past data to make decisions. Thus, ML cannot foresee future consequences as humans can (Lindebaum et al., 2020; Mintzberg, 1987, 1994). Relatedly, part of humans’ background knowledge concerns the social consequences of actions. In this regard, ML is inadequate and does not understand the difference between its own models and the world that those models represent (Smith, 2019). Furthermore, the greater opacity of ML not only increases the possibility of unforeseen errors but also shrinks the richness of knowledge needed for subsequent learning. Together, these arguments imply the following proposition:

Proposition 1b. When substituted for human decision-making, machine learning risks reducing the extent of background knowledge in organizational routines (“nescience effect”).

IMPLICATIONS FOR ORGANIZATIONAL LEARNING


The reduced routine diversity and knowledge richness associated with ML can affect OL by exacerbating existing learning myopia and inducing new sources of myopia. We theorize on this aspect first, and then elaborate on contingencies that may mute or amplify the risk of such myopia.


Machine Learning and Myopia of Learning


Cognitive limitations with regard to information processing (Cyert & March, 1963; Simon, 1947, 2000) constrain humans to search “locally,” which can lead to myopic learning (Levinthal & March, 1993). The computational capabilities of ML can reduce some of these constraints. However, ML cannot address the underlying causes of myopia because that requires the ability to diagnose and resolve problems, both of which require substantive rationality that ML does not possess. As Lindebaum et al. (2020: 256) observed, “the detection and correction of errors depend on an escape from the strictures of formal rationality.” Hence, the key to solving myopia due to the limitations of human learning lies within the unique features of human learning. In the context of OL, these features translate into the diversity in routines and richness of background knowledge.

A diversity of routine variants built on differing value criteria, expectations, and perceptions offers organizations a multiplicity of approaches and facilitates the substantive rationality needed to diagnose and respond to problems of myopia. In addition, routine diversity can help by increasing the organization’s resilience to errors. For instance, if the negative aspects of one variant are offset by another variant, then organizations that have both variants will likely outperform those with only one of them. ML risks eliminating these benefits by reducing the diversity in routines.

Similarly, the richness of background knowledge in routines enables the organization to apply substantive rationality to identify and address myopia. Because ML lacks contextual and causal knowledge, and its underlying decision rules are opaque, the organization cannot easily reproduce outcomes of an existing routine, or diagnose and act on problems. As Mintzberg (1994: 19) noted, “formal systems could certainly process more information, at least hard information. But they could never internalize it, comprehend it, synthesize it.”

These arguments imply that by reducing the diversity and richness of background knowledge in organizational routines, and consequently hindering the organization’s ability to diagnose and resolve problems of myopia, ML can induce myopia even in the absence of myopic behavior among humans. This myopia can be aggravated if ML selects routine variants (from the historical data) that are themselves the product of past myopic behavior by humans, and encodes and amplifies the negative aspects of those variants. Furthermore, this risk persists even if ML is updated over time, as the updating itself relies on data that have arisen from routine variants previously selected by ML.
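
This self-reinforcing loop can be sketched with a simple greedy-selection simulation: once a variant is selected on historical fit, subsequent updates see only data generated by that variant, so an initially unlucky estimate of the better alternative is never revised. The payoff probabilities are illustrative assumptions.

```python
# A toy sketch of the feedback loop: updating relies on data produced by
# variants that were previously selected. Probabilities are illustrative.
import numpy as np

rng = np.random.default_rng(3)
p_success = {"A": 0.6, "B": 0.7}  # variant B is actually better

# Sparse initial history in which variant B happened to perform poorly.
history = [("A", rng.random() < p_success["A"]) for _ in range(5)]
history += [("B", False), ("B", False)]  # two unlucky outcomes for B

def best_variant(hist):
    """Select purely on historical fit, with no exploration."""
    rate = {v: np.mean([s for vv, s in hist if vv == v]) for v in ("A", "B")}
    return max(rate, key=rate.get)

for period in range(200):  # repeated automatic updating
    chosen = best_variant(history)
    history.append((chosen, rng.random() < p_success[chosen]))

# Variant A locks in: B is never chosen again, so its estimate never improves.
print("variant used in the long run:", best_variant(history))
```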

We now elaborate on these arguments in the context of three specific forms of learning myopia.

Ignoring the long run. A key risk in the learning process is temporal myopia, where organizations overlook distant times and choose solutions that are optimal in the short run but not so later (Levinthal & March, 1993). With humans, this myopia arises because short-run problems are often easier to specify, and decision-makers have an immediate payoff from solving them (Ahuja & Lampert, 2001).

With ML, such myopia arises because it reduces the diversity in routines. ML strongly favors variants that perform well in the context of historical data and eliminates variants that do not conform to the data. This process of selection enhances the organization’s efficiency in contexts similar to the historical data, but decreases the organization’s ability to identify and react to longer-term problems that may be different from those in the historical data. Overfitting in ML exacerbates this tendency by emphasizing conformity to historical data at the cost of generalizing to other contexts (Choudhury et al., 2020). Furthermore, the greater nescience associated with ML can make it harder for organizations to identify and evaluate longer-term concerns, which requires decision-makers not only to understand how routines worked in the past but also to form expectations about how they may perform in the future (Dewey, 1922; Mintzberg, 1994). Thus, we have:

Proposition 2a. Substituting human decision-making with machine learning may intensify the propensity of organizations to emphasize the short run over the long run.

Overlooking organizational interdependencies. Another form of learning myopia is spatial myopia, or focusing on spatially proximate aspects while ignoring spatially distant ones. In our context, one particular aspect that can be overlooked is interdependencies across routines. Organizational routines are often interdependent (Cohen & Bacdayan, 1994; Nelson & Winter, 1982), which requires decision-makers to consider such interdependencies in their decisions. However, because spatially closer problems may be more salient and solvable (Levinthal & March, 1993), decision-makers in one routine may overly focus on their local routine and overlook interdependencies with other routines.


In this respect, ML can be beneficial because it can discover some interdependencies that humans (and other software) cannot—specifically, it can discover weak correlations among many variables in large volumes of data. However, ML can identify only codifiable dependencies that are present in the historical data. Hence, ML will miss any interdependencies that require knowledge of the world outside the historical data, including an understanding of judgments made by humans (e.g., customers may decide to buy because of factors other than the discount offered via ML). It will also miss intertemporal dependencies that need foresight because ML relies on historical data. Relatedly, an important aspect of identifying interdependencies is assessing their appropriateness for the organization. ML cannot make that assessment, as it lacks the necessary judgment and understanding. Furthermore, the reduction in background knowledge due to ML is likely to make it harder for decision-makers to address problems arising from ignoring interdependencies by decreasing their understanding of interdependencies and causal interlinks therein. Thus, we propose:

Proposition 2b. Substituting human decision-making with machine learning may increase the organizational propensity to overlook interdependencies across routines.

Inability to predict extreme failures and successes. ML’s lower routine diversity and greater nescience also affect its ability to predict extreme failures and successes. Focusing first on such failures, humans tend to ignore failures in their learning (Denrell, 2003; Feldman & March, 1981; Levinthal & March, 1993). Such bias also occurs at the organizational level, as organizations often retain and promote individuals who are successful while pushing out those that fail (Levinthal & March, 1993). An analogous risk also affects extreme success. In particular, organizations may not be able to draw reliable conclusions about what leads to such success because they do not correctly interpret the success experience (Kim, Kim, & Miner, 2009), or they ignore the fact that success could be just due to luck (Denrell, Fang, & Liu, 2019).

ML can amplify these biases and induce such biases on its own. First, to the extent that failures were overlooked in the historical decisions made by humans, ML will possibly amplify that oversight if it selects routine variants that resulted in the overlooking of failures. This, in turn, would increase the risk of replicating such failures. Similarly, to the extent that misattributing the cause of extreme successes is reflected in the historical data, ML will echo such misattributions. This, in turn, will reduce the likelihood of selecting variants that lead to extremely successful outcomes.

Second, this risk persists even if organizations do not undersample failure or misattribute the causes of success. Based on their substantive rationality, background knowledge, and foresight (Lindebaum et al., 2020; Mintzberg, 1994), humans tend to avoid routine variants that would result in extreme failure. Because humans never, or rarely, select such variants, those variants will not appear, or will appear very infrequently, in the historical data. This paucity in the historical data makes it more likely that ML may mistakenly select such a variant when it encounters one, especially given its inability to understand the consequences of its actions. In the absence of human intervention, this mistake can occur even if ML automatically updates in response to environmental changes. That occurs because updating is based narrowly on codifiable data, which are likely to be inadequate to diagnose and resolve extreme failures (unlike smaller errors that ML may be able to self-correct). Moreover, even though organizations can correct errors in ML after they have been discovered, such corrections can be very costly. For instance, the recent “A-level fiasco” in the United Kingdom hurt many students when ML assigned them test scores based on factors that were beyond their control (Rao & McInerney, 2020). Moreover, identifying and responding to the errors themselves can be harder because of the reduction in routine diversity and background knowledge in the organization. In a similar vein, the paucity of extremely successful variants in a routine will also make it less likely for ML to be able to identify such variants accurately. To summarize,

Proposition 2c. Substituting human decision-making with machine learning may increase the risk of selecting variants that cause extreme failures.

Proposition 2d. Substituting human decision-making with machine learning may reduce the likelihood of selecting variants that lead to extremely successful outcomes.
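
The data-paucity argument behind Propositions 2c and 2d can be sketched as follows: because humans, exercising foresight, never chose a ruinous variant, it is absent from the historical data, and a model fit to that data extrapolates a favorable prediction for it. The discount setting and profit function are illustrative assumptions.

```python
# A minimal sketch of the extreme-failure risk: a ruinous variant (a 90%
# discount) never appears in historical data, so the fitted model
# extrapolates and rates it favorably. All numbers are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)

def true_profit(discount):
    # Mild discounts help; extreme discounts are catastrophic (assumed).
    return 10 * discount - 40 * discount ** 3

# Historical data cover only the variants humans were willing to try.
d_hist = rng.uniform(0.0, 0.3, 500).reshape(-1, 1)
profit_hist = true_profit(d_hist.ravel()) + rng.normal(0, 0.2, 500)

model = LinearRegression().fit(d_hist, profit_hist)

extreme = np.array([[0.9]])
print("predicted profit at a 90% discount:",
      round(float(model.predict(extreme)[0]), 2))
print("actual profit at a 90% discount:",
      round(true_profit(0.9), 2))
```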

Contingencies Shaping the Impact of Machine Learning on Organizational Learning


The risk of myopia due to substituting human decision-making with ML is not likely to be uniform across all routines and contexts, since these are likely to vary in their need for routine diversity and knowledge richness. On the one hand, when the need for routine diversity and knowledge richness is not high, the benefits of ML may offset the advantages from having greater routine diversity and knowledge richness. Indeed, organizations incur costs in developing and maintaining routine diversity and knowledge richness. Some variants and knowledge may not be relevant to organizational performance in a given context and, hence, may not generate any benefits for the organization in that context. This is particularly true as fallibilities in human learning, including inflexibility to change, may cause organizations to retain poorly aligned routines. In this respect, the faster reduction in routine diversity achieved by ML can benefit by eliminating variants that are not well-suited to that context and freeing up resources that can be used for OL elsewhere. ML can also benefit by helping organizations assess environmental changes more quickly and discover some patterns that are not apparent to humans. On the other hand, as described in the foregoing propositions, when the need for routine diversity and background knowledge is high, substituting human decision-making with ML can increase the risk of myopia. We elaborate on this by identifying some contingencies that affect the trade-off between these benefits of ML and the benefits of greater routine diversity and background knowledge.


Rate and magnitude of unexpected environmental changes. An important context that previous research has highlighted is environmental uncertainty, or the rate and extent of unexpected changes in the environment (Becker, 2004; Galbraith, 1973; Lawrence & Lorsch, 1967; Milliken, 1987; Tosi, Aldag, & Storey, 1973). Unexpected changes may not only alter the state of the environment in a way that affects the performance of different routine variants but can also modify the underlying cause-effect relationships in routines (Milliken, 1987).

In our context, the rate and magnitude of unexpected environmental change affect the relative advantage of greater routine diversity and background knowledge, and thus the risk of learning myopia. The low uncertainty in static and slowly changing environments reduces the need for organizations to search for additional information (Becker, 2004; Galbraith, 1973, 1974) and provides more time for learning. Thus, in such situations, organizations are likely to converge to a stable set of routines with little variation, irrespective of whether humans or machines are involved. Thus, ML’s disadvantage with regard to routine diversity will be minimal. Similarly, when the magnitude of change is small, the environmental uncertainty is low, and underlying cause-effect relationships in routines do not change significantly (Lawrence & Lorsch, 1967; Tosi et al., 1973). Thus, ML trained in one environment is likely to be relevant in new environments and not cause a significant increase in organizational nescience. Hence, in such cases, the benefits from ML likely outweigh the advantages of greater routine diversity and knowledge richness.

The relative disadvantage of ML’s lack of background knowledge also diminishes for routines in rapidly changing environments. In such environments, organizations and decision-makers are likely to be time-constrained due to the greater information-processing needs (Galbraith, 1973, 1974), their slower information-processing speed, and difficulties in predicting the possible effects of environmental changes (Milliken, 1987). Thus, their background knowledge is less likely to be helpful. In addition, human inflexibility to change can slow down organizational response in fast-changing environments. In such contexts, the computational superiority of ML and its greater flexibility (within its narrow model parameters) may enable it to identify and adapt more quickly to rapid environmental changes compared with human learning. Similarly, with large changes that significantly alter the environmental landscape, human inflexibility to change is likely to impede any significant transformations of organizations needed to respond to such changes. Moreover, neither ML nor human knowledge is likely to be relevant, so that the human advantage diminishes. Consistent with this, Dubey, Agrawal, Pathak, Griffiths, and Efros (2018) found that masking all prior background knowledge humans may have about video games (e.g., gravity causes things to fall), which can be considered a large change, results in a greater deterioration in human performance than does masking only some of their prior knowledge. ML, on the other hand, does not exhibit this effect.

In contrast, when the rate of change is moderate, humans can draw upon their diverse experiences and background knowledge and respond to the change. Similarly, with changes of moderate magnitude, the greater routine diversity and background knowledge associated with human learning are likely to continue to be relevant, while ML is likely to suffer a performance decline due to its greater context specificity and nescience. Thus, the effects of a change and responses to it are likely to differ depending on whether ML or human learning is involved. To summarize,

Proposition 3a. The increase in myopia due to machine learning is likely to be highest at moderate rates and magnitudes of environmental change.

Corollary. Substituting human decision-making with machine learning is likely to be most beneficial to organizational learning (a) in slow or static environments or (b) at high rates and magnitudes of environmental change.


Complexity of routines. The complexity of relationships between inputs and outcomes is likely to increase the extent of information processing required (Galbraith, 1973). Hence, compared with simpler routines, ML is likely to be slower in evaluating and selecting the best variants in complex routines that have more complex, nonlinear relationships between inputs and outcomes. Consequently, the reduction in diversity due to ML is likely to be slower in complex routines than in simpler routines. However, ML’s computational superiority over humans is likely to be an even bigger advantage when evaluating complex, nonlinear relationships between inputs and outcomes. Hence, relative to human learning, ML is likely to lead to a greater reduction of diversity in complex routines than in simpler routines. This is likely to be particularly true if these complexities arise due to greater social interactions and differences in contexts. In such cases, human learning is likely to spawn a greater diversity of variants and richness of background knowledge in complex routines than in simpler routines. In contrast, by selecting a small set of models (and thereby possibly favoring one context over others) and eliminating contextual knowledge, ML is likely to lead to greater myopia in complex routines. This effect is likely to be higher if complex routines also involve significant noncodifiable aspects. Further, if complex routines also involve a long delay from actions to outcomes, ML cannot update decision models frequently, and thus can exacerbate myopia. In contrast, learning in simpler routines requires less knowledge richness, and ML can benefit OL by rapidly identifying the best variants without the high cost of reducing routine diversity and knowledge richness. To summarize:

Proposition 3b. The reduction in routine diversity and background knowledge, and hence the increase in myopia, due to machine learning is likely to be higher for complex routines than for simpler routines.

Universality of cause-effect relationships. To the extent that myopia arises from inadequate data (e.g., about extreme outcomes), organizations may be able to address this by aggregating data across contexts. For example, a disease may be rare, and an organization may not have enough historical data to allow ML to diagnose accurately. However, because the underlying cause-effect relations (about what causes that disease) may be similar across contexts, organizations can aggregate historical data—for instance, from other parts of the organization or by collaborating with other organizations—to increase the volume of data. With more data, organizations can leverage ML’s speed and ability to discover patterns and (partially) compensate for the reduction in routine diversity and knowledge richness arising from ML. However, if routines rely on cause-effect relations that are not as universal (e.g., a routine to select an entrepreneur for funding or a routine to select an individual for leadership training), historical data cannot be aggregated across contexts. In such cases, routine variants selected (by ML) for one context are likely to be ineffective in others, and the error in the statistical models of ML is likely to be higher. In contrast, the richness of background knowledge and diversity in routines associated with human learning can enable the flexibility and substantive rationality needed to learn and respond across multiple contexts. Thus, we have:

Proposition 3c. The less universal the cause-effect relationships in a routine, the higher the possibility of machine learning causing and amplifying myopia.

Learning dependence among routines. Learning about one routine may depend on learning about other routines. In particular, human learning about tasks is developed by performing and learning about a series of simpler, related tasks and subcomponents (Krueger & Dayan, 2009). For instance, individuals need to learn basic arithmetic before they can learn calculus. In organizations, we see this when more consequential decisions are delegated to more experienced decision-makers who have likely learned from making simpler decisions with less at stake. In such cases, ML may weaken OL by eliminating learning opportunities for humans in the dependent routines. This is likely to be particularly true if the learning is not codifiable, and organizations cannot substitute such “on-the-job” learning with other learning mechanisms (e.g., an academic course). Thus, substituting human decision-making with ML in a routine may not only lower routine diversity and background knowledge in that routine but induce learning myopia in other routines that depend on it for learning, even if the organization does not substitute ML for human decision-making in the dependent routines. In contrast, such a risk is lower in routines that have no learning dependencies. Hence, we state:

Proposition 3d. The increase in myopia due to machine learning is likely to be higher in the presence of learning dependencies among routines.

Organizations use a combination of routines to accomplish tasks (Pavitt, 2002), and group clusters of similar tasks into functions. Hence, our foregoing propositions imply that ML is likely to increase the risk of myopia in tasks and functions that face moderate environmental changes and involve routines with complex, varied cause-effect relationships. Similarly, due to their greater learning dependence, tasks that link different functions (e.g., product development tasks linking marketing and research and development) may be at a higher risk of myopia if ML substitutes for human decision-making in those functions. Furthermore, the last proposition implies that ML not only will affect people whose jobs are substituted with ML but also may increase myopia in decision-makers who rely on learning from simpler decisions to develop their skills and judgment, especially because such decisions are more likely to be substituted with ML.


SELECTION EFFECT OF MACHINE LEARNING: A “PROOF-OF-CONCEPT” SIMULATION

We have contrasted several features of human learning and ML and presented two mechanisms that link ML to OL. Motivated by Polos, Adner, Ryall, and Sorenson (2009), we delve deeper into one aspect—ML’s ability to select routine variants more quickly than can be accomplished by humans—and present a simple proof-of-concept simulation. By focusing on only one of the many differences between human learning and ML, we illustrate (in a stylized way) that ML can cause some of the theorized consequences even if human learning is not biased or cognitively limited in any way other than its slower speed. Further, because it is a proof-of-concept, we do not derive a complete solution over all possible values of the simulation parameters, but only seek to show that such consequences can occur over some part of the parameter space.

Routine Diversity and Organizational Learning

Our simulation model structure is broadly based on Yi et al. (2016). We assume that an organization consists of five routines. Initially, each routine consists of 20 variants, which can be considered to arise from the features of human learning discussed earlier as well as the initial uncertainty among decision-makers about the optimal variants. The number of variants falls over time as the organization learns and eliminates variants that do not align well with the environment. The environment is characterized by a predefined landscape, modeled as a “fitness score matrix” containing the “fitness scores” for each of these 100 variants that reflect the extent to which each variant contributes to organizational performance (with higher fitness indicating better performance).
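For readers who wish to replicate this setup, the following minimal sketch (in Python, using NumPy) initializes such a landscape. The uniform distribution of fitness scores and the fixed random seed are our illustrative assumptions; the article specifies only a predefined fitness score matrix.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

N_ROUTINES = 5    # routines per organization (from the text)
N_VARIANTS = 20   # initial variants per routine (from the text)

# "Fitness score matrix": one score per variant, reflecting how much that
# variant contributes to organizational performance (higher is better).
# The uniform draw is an assumption made for illustration.
fitness = rng.uniform(0.0, 1.0, size=(N_ROUTINES, N_VARIANTS))

# Track which variants the organization still uses; learning flips entries
# from True to False as misaligned variants are eliminated.
alive = np.ones((N_ROUTINES, N_VARIANTS), dtype=bool)
```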

We model OL as arising from probabilistically eliminating low-performing variants within each routine. Specifically, we assume that in every period, human learning identifies variants in the bottom fifth percentile of performance within a routine and eliminates them with an 80% success rate. Given our interest in the implications of ML’s speed, we do not model a specific type of ML. Instead, for illustrative purposes, we assume that ML is eight times faster than human learning in eliminating misaligned variants (inferences using other values are similar). We assume that any eliminated variants are permanently lost. For simplicity, we do not model additions of new variants.
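Continuing the sketch above, the selection step below implements this rule: each period, surviving variants in the bottom fifth percentile of fitness within a routine are eliminated with 80% probability, and ML is modeled purely as running this step eight times for every human step. Retaining at least one variant per routine is our assumption, added to avoid empty routines; the `performance` function anticipates the baseline performance measure used in the next paragraph.

```python
def learning_step(fitness, alive, pct=5.0, success=0.8):
    """One learning period: within each routine, surviving variants in the
    bottom `pct` percentile of fitness are eliminated with probability
    `success`. Eliminated variants are permanently lost."""
    for r in range(fitness.shape[0]):
        live = np.flatnonzero(alive[r])
        if live.size <= 1:
            continue  # assumption: retain at least one variant per routine
        cutoff = np.percentile(fitness[r, live], pct)
        for v in live:
            if fitness[r, v] <= cutoff and rng.random() < success:
                alive[r, v] = False

def run_period(fitness, alive, ml=False, ml_speed=8):
    """ML differs from human learning only in speed here: it performs
    `ml_speed` elimination steps in the time humans perform one."""
    for _ in range(ml_speed if ml else 1):
        learning_step(fitness, alive)

def performance(fitness, alive):
    """Baseline (no interdependencies): mean fitness of surviving variants."""
    return fitness[alive].mean()
```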

As learning occurs and misaligned routine variants are eliminated, organizational performance (the mean fitness score of variants in existence) improves, and diversity of routines as measured by the number of active variants decreases. This effect is shown in Panels A and B of Figure 2. Over time, both human learning and ML reach the same level of routine diversity and performance, having weeded out all variants but those with the highest fitness scores. Thus, in this case, ML is unambiguously superior to human learning as the organization benefits from the rapid reduction in routine diversity achieved by ML. This benefit of ML increases as the speed of ML rises.

Machine Learning and Ignoring Organizational Interdependencies Across Routines

We model within-organizational interdependencies in routines as the degree to which the performance of a routine variant depends on the presence of other routines and their variants. In a “fully interdependent” case, we assume that each dyad of routine variants contributes to organizational performance, and that such dyadic contributions to organizational performance are lost if either of the two variants is eliminated. Therefore, eliminating a variant affects organizational performance directly and indirectly through its effect on other variants. With “no interdependencies,” we assume that the performance contribution of each routine variant is independent of other variants. 
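A sketch of the fully interdependent case, continuing the code above, follows. Treating every unordered pair of variants across the organization as carrying a dyadic contribution that is lost when either member is eliminated mirrors the structure described in the text; the uniform draw of dyadic scores is our assumption.

```python
# Dyadic contributions for the "fully interdependent" case: each unordered
# pair of variants contributes to performance only while both survive.
n_total = N_ROUTINES * N_VARIANTS
dyad = np.triu(rng.uniform(0.0, 1.0, size=(n_total, n_total)), k=1)

def performance_interdependent(alive):
    """Mean dyadic contribution over pairs in which both variants survive;
    eliminating one variant also wipes out every dyad it participates in."""
    a = alive.ravel()
    both_alive = np.triu(np.outer(a, a), k=1)
    n_pairs = both_alive.sum()
    return (dyad * both_alive).sum() / n_pairs if n_pairs else 0.0
```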

Panel C in Figure 2 illustrates the case of full interdependence in routines with no environmental changes. It can be seen that ML has a short-term advantage but performs poorly over the long run, which is consistent with Proposition 2b. In the context of the simulation, this occurs because ML rapidly eliminates some variants that contribute to the performance of other routines. Human learning, on the other hand, by eliminating variants more slowly, allows for the discovery of interdependencies.


FIGURE 2 Selection Effect of Machine Learning (Insights from the Simulation)

Note: Panels are based on 100 model runs along with a one-standard-deviation band. Panel D assumes that 10% of fitness scores change every 20 periods, while Panel E presents a case in which 90% of fitness scores change every 20 periods. Panel F displays the mean relative performance of ML and human learning for different magnitudes (x-axis) and frequencies (y-axis) of environmental shocks; blue, pink, and dark green areas mark where the value of relative performance is larger than zero.

Machine Learning and Ignoring the Long Run

We model environmental changes as exogenous shocks that randomly change a certain proportion of fitness scores with bigger shocks changing a larger proportion of fitness scores. We also vary the periodicity of shocks to model the rate of environmental change. We assume that there are no interdependencies so as to isolate the effect of environmental changes.
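A shock mechanism consistent with this description is sketched below, continuing the earlier code. The 10% magnitude and 20-period cadence echo the Panel D scenario; redrawing affected scores from the same uniform distribution is our assumption.

```python
def apply_shock(fitness, magnitude):
    """Exogenous environmental shock: redraw a `magnitude` share of fitness
    scores at random. Bigger shocks change a larger share of scores."""
    k = int(round(magnitude * fitness.size))
    idx = rng.choice(fitness.size, size=k, replace=False)
    fitness.flat[idx] = rng.uniform(0.0, 1.0, size=k)

# Example: a moderate shock (10% of scores) every 20 periods, as in Panel D.
for t in range(1, 201):
    run_period(fitness, alive, ml=True)
    if t % 20 == 0:
        apply_shock(fitness, magnitude=0.1)
```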

Panel D of Figure 2 presents a shock of “moderate” magnitude, while Panel E presents a shock of “large” magnitude. There are some key differences between the ML and human learning patterns. As in the baseline case, the faster speed of ML enables it to outperform human learning before the environment shock (by eliminating more misfit variants). After an environmental shock, organizational performance drops for both ML and human learning because the routine variants selected for the previous environment are not as well-aligned with the new environment. However, the performance drop is larger for ML (e.g., after the first shock, performance drops from 0.95 to 0.87 in the case of ML, compared with a drop from 0.82 to 0.77 for human learning, consistent with Proposition 2a). This occurs because ML rapidly selects variants that are well-aligned with the old environment, but those variants are not as suited to the new environment. In contrast, the slow speed of human learning retains more variants, some of which are suited for the new environment. This difference between human learning and ML intensifies as the speed of ML increases.

Over the long term, when the magnitude of shocks is moderate (Figure 2, Panel D), the greater routine diversity of human learning results in its outperforming ML. However, when the shocks are large (Panel E), the higher diversity of human learning does not offer any advantage (consistent with the corollary to Proposition 3a). Consistent with this, Panel F of Figure 2 shows that human learning is better at moderate magnitudes of environmental shocks (blue, pink, and green areas). ML has an advantage when the magnitude of shocks is either high or low. ML also has an advantage at high rates of change (Panel F). Intuitively, this occurs because, at high rates of change, ML’s higher speed selects variants and improves the organization’s alignment with the environment more rapidly than can be achieved by humans. Thus, our results show how ML’s speed in selecting variants can benefit OL under some conditions but lead to myopia in others. Further, it shows that limitations of human cognition (other than its slower speed) are not necessary for ML to induce myopia and hurt organizational performance.


DISCUSSION

Motivated by recent concerns about ML (Broussard, 2018; Smith, 2019), we have theorized about how substituting human decision-making with ML may affect OL. In particular, we have theorized that such substitution may jeopardize the diversity and richness of background knowledge in organizational routines due to the distinct differences between how humans and ML encode past experience. We have theorized that these changes, in turn, risk the possibility of organizations overlooking the long run, interdependencies across routines, and extreme outcomes, and have laid out important contingencies that may amplify or mute the risk of such myopia.

Our study contributes to the literature on OL. Combining calls for theorizing on the perils and promises of algorithmic decision-making (Lindebaum et al., 2020) and for research on how new technologies affect OL (Argote, 2011), our study offers one of the first systematic theorizations of the impact of ML on OL. Building on Choudhury et al. (2020) and Lindebaum et al. (2020), we show how the differences between human learning and ML translate into the two underlying mechanisms that link ML to OL—diversity and richness of background knowledge in organizational routines. For studies of learning myopia (Ahuja & Lampert, 2001; Denrell, 2003; Denrell & Le Mens, 2020; Levinthal & March, 1993; Levitt & March, 1988), we elaborate on how ML, with its unique combination of characteristics, can not only amplify existing myopia arising from human fallibilities but also introduce myopia even in the absence of such fallibilities. Combining this with the elaboration of related contingencies, our theorization highlights that although ML can benefit OL in some situations, it risks significantly impoverishing OL in many others.

This article also contributes substantively to recent conversations about the impact of ML on organizations (e.g., Cowgill & Tucker, 2019; Lindebaum et al., 2020). In particular, it highlights an important link between ML and routines that has not been emphasized so far—that the statistical variations in historical data arise from within-organizational-routine variations, and hence that model selections made by ML can be interpreted as selecting among routine variants. Thus, it offers a new theoretical approach to linking the impersonal data used by ML to the underlying organizational mechanisms that generate those data. Another contribution lies in our highlighting of the ML selection effect—the accelerated reduction in routine diversity relative to human learning. While algorithmic bias, overfitting, reliance on statistical correlations, and the opacity of ML have been discussed before (Choudhury et al., 2020; Lindebaum et al., 2020; Whittaker et al., 2018), to our knowledge we are the first to highlight intensified selection (among routine variants) and its attendant implications as an important effect of substituting human decision-making with ML. In this respect, our simulation also provides another pathway for scholars to analyze the impact of ML on OL. Furthermore, because our theorization follows from the features of human learning and ML, our arguments can be generalized to technologies with similar features to those of ML.

We now discuss how our theorization can help extend prior research, starting with how our model can be tested.

Testing Our Model

Experimental studies are particularly well-suited to directly test our propositions. Such studies would take a longitudinal approach with repeated execution of decision-making tasks among treatment and control groups. A possible sequence of experiments would involve an initial period of decision-making with only human decision-makers, followed by a switch to ML decision-making in some of the groups (with ML being trained on the data compiled during the initial human decision-making period). Measuring change in routine diversity, background knowledge, and performance will provide the necessary information to evaluate our propositions. Contingencies related to environmental change may be imposed by changing the tasks or the relationships among relevant variables in the task. Similarly, interdependencies among tasks can also be altered. Researchers will have to develop appropriate measures of routine diversity and background knowledge based on the specific kind of decision-making tasks being considered, and qualitative coding based on questionnaires and researcher observations is likely to be the most useful.


Besides experiments, longitudinal comparisons of routine diversity, knowledge richness, and performance before and after organizations have substituted human decision-making with ML offer another way to test our propositions. However, since the decision to use ML may depend on factors that are unobservable to researchers, a more feasible test may be to compare changes in these variables across organizations after unexpected environmental changes and their relationship with ML use. Our theory predicts that, controlling for other factors, the extent of ML substitution and the magnitude of environmental change will influence changes in these variables. In addition, case studies and analyses of organizations that rely on ML can also be helpful. For example, research can study hedge funds, where ML is being used to devise trading strategies (Hendershott, Jones, & Menkveld, 2011), to examine the relationships between using ML and the diversity of trading strategies and performance. Finally, researchers can also indirectly assess our propositions by examining if the adoption of ML is slower in contexts that need greater routine diversity and background knowledge.

Extending Research on Machine Learning in Organizations

Coconstitutive interactions between humans and machine learning. An important feature of organizations is the coconstitution of human learning and technological artifacts (Introna, 2014; Orlikowski, 2007; Riemer & Johnston, 2017). In our context, organizations (and humans therein) adapt their behavior to ML, while simultaneously adapting ML to their needs, thus creating artifacts that are unique in the context of their “local worlds” (Riemer & Johnston, 2017). Given the scope of our study, we have not delved into the nature and implications of such coconstitution for OL in detail. Below, we offer some ideas on how future work can examine adaptations to such “local worlds,” and their implications for OL.

By highlighting how and where ML may risk OL, our theorization has implications for an important adaptation to ML—instituting appropriate mechanisms for governance of ML to mitigate the risks to OL and ensure successful OL. Our theorizing suggests that the greater opacity and autonomy of ML (Murray et al., 2020) can increase the likelihood of it selecting variants that cause extreme failure. Thus, governance mechanisms that allow organizations to identify and prevent such failures will be important. More broadly, our theorizing suggests that governance mechanisms should focus on recognizing where the need for routine diversity and knowledge richness may be high, and building alternative inventories of routine diversity and background knowledge that may be lost when ML replaces human decision-making. Another aspect of governance during the ML training process relates to going beyond evaluating predictive accuracy and understanding that diversity in organizational routines gives rise to statistical variations in the historical data. In addition, our theorization implies greater caution when organizations use ML in simpler routines (where ML may be beneficial) if such routines form the foundation for subsequent human learning, and it highlights the need to develop alternative opportunities for such learning. Some OL-related governance implications of substituting human decision-making with ML are summarized in Table 1. Future research can study changes in other organizational features related to governance, including organizational structure and top management team composition, and investigate how they affect the impact of ML on OL.

Another important aspect of adaptation relates to the fact that a large part of human learning concerns aspects other than information processing. One such aspect is emotions (Dewey, 1922), which ML cannot easily process since it relies on impersonal data and formal rationality. Research can examine how the variation in emotional content across tasks relates to the need for routine diversity and knowledge richness, and how it may moderate the effect of ML on OL. Relatedly, research can examine how individuals respond to ML substituting for humans, including being superseded or monitored by ML, and how that affects learning. A related area is the role of human trust in ML. Smith (2019: xiv) highlighted a concern that humans may be “unduly impressed,” which may hurt OL by reducing an organization’s ability to evaluate and understand ML’s actions. Empirical examinations of this concern, including what factors drive human trust in ML, can be productive.

Organizational goals and machine learning. Our theorizing has largely focused on the information-processing and efficiency goals of organizations. However, while these are important, organizations may also have other goals. Organizations that have other objectives are likely to differ with regard to the relevance of ML to OL. Studies can explore how the nature of organizational goals influences the need for routine diversity and knowledge richness and, as a consequence, the impact of adopting ML. In this regard, creative organizations can offer insightful contexts, as ML can already substitute for humans in some aspects of art (Still & d’Inverno, 2019) and music (Fiebrink, Caramiaux, Dean, & McLean, 2016). Similarly, organizations formed for other purposes, such as exploration, may offer another interesting context. In these cases, with its ability to generate nonobvious patterns, ML can offer something unexpected (e.g., a new art form to explore). However, sustaining creativity and exploration may also need greater routine diversity and knowledge richness, which may be affected by ML.


TABLE 1 OL-Related Governance Implications of Substituting Human Decision-Making with ML

Extending Research on Organizational Learning

Knowledge depreciation in organizational routines. An important aspect of OL relates to the depreciation of knowledge (Argote & Epple, 1990). With human learning, knowledge depreciation can happen because individuals forget how to perform a task, or due to individual turnover in organizations. With no turnover and a near-infinite ability to “remember” decision models, ML has the potential to address such forgetting and depreciation. However, we have theorized that ML can induce depreciation of organizational knowledge in its own way. Hence, an interesting avenue for future research is theorizing about how ML influences organizational forgetting, and examining any contingencies therein.

Interorganizational spillovers of learning. An important boundary condition for our theorization is its focus on variation in routines within an organization. Future research can extend our study and examine how ML affects interorganizational spillovers of learning. This is particularly relevant as organizations also learn vicariously from their peers (Baum & Dahlin, 2007; Baum et al., 2000; Ingram & Baum, 1997). In this regard, it may be particularly interesting to study “cloud ML,” which uses historical data from multiple organizations to estimate decision models. This may accelerate the diffusion of high-performing routine variants, as cloud ML will select variants that perform well across many organizations. However, it may also exacerbate the concerns that we have theorized about if, driven by cloud ML, organizations adopt similar variants and interorganization diversity in routines decreases. Much as the lack of genetic diversity makes a species vulnerable to environmental changes (Cohn, 1986), research can assess whether ML increases organizational isomorphism (Hannan & Freeman, 1977) by reducing interorganizational diversity in routines and whether that affects the performance of organizational populations. A related area is how this impact on organizations differs based on resources. The rise of cloud ML and third-party service providers may enable the use of ML by organizations that would otherwise not have been able to do so due to the lack of resources. However, such organizations may also not be able to fully understand ML’s limitations and thus face increased risks when they substitute human decision-making with ML.


Interindustry differences in organizational learning. Research has suggested that experience is more important to organizational performance in some industries than in others (Balasubramanian & Lieberman, 2010). Hence, research can examine whether the impact of ML on OL depends on the importance of learning from experience in an industry. More broadly, research can examine dimensions of the industry that influence the need for routine diversity and knowledge richness. Some dimensions include industry life cycle (e.g., routine diversity and knowledge richness may be less important later in the life cycle) and competitive intensity (e.g., routine diversity may be costlier in competitive industries, but may help respond to competition).

CONCLUSION

With organizations increasingly using ML to conduct many intellectual tasks previously performed by humans, we are likely to see significant changes in how organizations learn. Recent studies have started laying the theoretical foundations for studying ML in organizations. However, we still lack understanding of how substituting human decision-making with ML may affect OL, including the underlying mechanisms through which ML may affect OL. In this study, we take a first step toward addressing these gaps. By laying out a systematic framework that highlights the risk of myopia due to ML along with some associated contingencies, we hope that our article motivates and facilitates a deeper conversation about the risks and benefits of ML, and the roles of humans therein.

REFERENCES

Ahuja, G., & Lampert, M. C. 2001. Entrepreneurship in the large corporation: A longitudinal study of how established firms create breakthrough inventions. Strategic Management Journal, 22: 521–543.

Anand, J., Mulotte, L., & Ren, C. R. 2016. Does experience imply learning? Strategic Management Journal, 37: 1395–1412.

Argote, L. 2011. Organizational learning research: Past, present and future. Management Learning, 42: 439–446.

Argote, L., & Epple, D. 1990. Learning curves in manufacturing. Science, 247: 920–924.

Argote, L., & Miron-Spektor, E. 2011. Organizational learning: From experience to knowledge. Organization Science, 22: 1123–1137.

Ash, T. G. 2016. Free speech: Ten principles for a connected world. New Haven, CT: Yale University Press.

Balasubramanian, N., & Lieberman, M. B. 2010. Industry learning environments and the heterogeneity of firm performance. Strategic Management Journal, 31: 390–412.

Baum, J. A., & Dahlin, K. B. 2007. Aspiration performance and railroads’ patterns of learning from train wrecks and crashes. Organization Science, 18: 368–385.

Baum, J. A., Li, S. X., & Usher, J. M. 2000. Making the next move: How experiential and vicarious learning shape the locations of chains’ acquisitions. Administrative Science Quarterly, 45: 766–801.

Becker, M. C. 2004. Organizational routines: A review of the literature. Industrial and Corporate Change, 13: 643–677.

Broussard, M. 2018. Artificial unintelligence. Cambridge, MA: MIT Press.

Brown, J. S., & Duguid, P. 1991. Organizational learning and communities-of-practice: Toward a unified view of working, learning, and innovation. Organization Science, 2: 40–57.

Choudhury, P., Starr, E., & Agarwal, R. 2020. Machine learning and human capital complementarities: Experimental evidence on bias mitigation. Strategic Management Journal, 41: 1381–1411.

Cohen, M. D., & Bacdayan, P. 1994. Organizational routines are stored as procedural memory: Evidence from a laboratory study. Organization Science, 5: 554–568.

Cohn, J. P. 1986. Surprising cheetah genetics. Bioscience, 36: 358–362.

Cowgill, B., & Tucker, C. E. 2019. Economics, fairness and algorithmic bias. Working paper. http://conference.nber.org/confer/2019/YSAIf19/SSRN-id3361280.pdf

Cui, G., Wong, M. L., & Lui, H. K. 2006. Machine learning for direct marketing response models: Bayesian networks with evolutionary programming. Management Science, 52: 597–612.

Cyert, R. M., & March, J. G. 1963. A behavioral theory of the firm. Englewood Cliffs, NJ: Prentice-Hall.

Denrell, J. 2003. Vicarious learning, undersampling of failure, and the myths of management. Organization Science, 14: 227–243.

Denrell, J., Fang, C., & Liu, C. 2019. In search of behavioral opportunities from misattributions of luck. Academy of Management Review, 44: 896–915.

Denrell, J., & Le Mens, G. 2020. Revisiting the competency trap. Industrial and Corporate Change, 29: 183–205.

Dewey, J. 1922. Human nature and conduct. New York, NY: Henry Holt and Company.

Dubey, R., Agrawal, P., Pathak, D., Griffiths, T. L., & Efros, A. A. 2018. Investigating human priors for playing video games. doi: 10.48550/arXiv.1802.10217

Feldman, M. S., & March, J. G. 1981. Information in organizations as signal and symbol. Administrative Science Quarterly, 26: 171–186.

Feldman, M. S., & Pentland, B. T. 2003. Reconceptualizing organizational routines as a source of flexibility and change. Administrative Science Quarterly, 48: 94–118.

Fiebrink, R., Caramiaux, B., Dean, R., & McLean, A. 2016. The machine learning algorithm as creative musical tool. Oxford, U.K.: Oxford University Press.

Fiol, C. M., & Lyles, M. A. 1985. Organizational learning. Academy of Management Review, 10: 803–813.

Galbraith, J. 1973. Designing complex organizations. Reading, MA: Addison-Wesley.

Galbraith, J. R. 1974. Organization design: An information processing view. Interfaces, 4: 28–36.

Glaser, V. L. 2017. Design performances: How organizations inscribe artifacts to change routines. Academy of Management Journal, 60: 2126–2154.

Hannan, M. T., & Freeman, J. 1977. The population ecology of organizations. American Journal of Sociology, 82: 929–964.

Hendershott, T., Jones, C. M., & Menkveld, A. J. 2011. Does algorithmic trading improve liquidity? Journal of Finance, 66: 1–33.

Hildebrand, D. L. 2008. Dewey: A beginner’s guide. Oxford, U.K.: Oneworld Publications.

Ingram, P., & Baum, J. A. 1997. Opportunity and constraint: Organizations’ learning from the operating and competitive experience of industries. Strategic Management Journal, 18: 75–98.

Introna, L. D. 2014. Towards a post-human intra-actional account of sociomaterial agency (and morality). In P. Kroes & P. P. Verbeek (Eds.), The moral status of technical artefacts: 31–54. Dordrecht, Netherlands: Springer.

Kalberg, S. 1980. Max Weber’s types of rationality: Cornerstones for the analysis of rationalization processes in history. American Journal of Sociology, 85: 1145–1179.

Kerr, G. 2007. The development history and philosophical sources of Herbert Simon’s Administrative Behavior. Journal of Management History, 13: 255–268.

Kim, J. Y., Kim, J. Y., & Miner, A. S. 2009. Organizational learning from extreme performance experience: The impact of success and recovery experience. Organization Science, 20: 958–978.

Krueger, K. A., & Dayan, P. 2009. Flexible shaping: How learning in small steps helps. Cognition, 110: 380–394.

Lawrence, P. R., & Lorsch, J. W. 1967. Organization and environment. Boston, MA: Harvard Business School, Division of Research.

Levinthal, D. A., & March, J. G. 1993. The myopia of learning. Strategic Management Journal, 14: 95–112.

Levitt, B., & March, J. G. 1988. Organizational learning. Annual Review of Sociology, 14: 319–338.

Lindebaum, D., Vesa, M., & den Hond, F. 2020. Insights from “The Machine Stops” to better understand rational assumptions in algorithmic decision making and its implications for organizations. Academy of Management Review, 45: 247–263.

Mahoney, J. T. 1995. The management of resources and the resource of management. Journal of Business Research, 33: 91–101.

March, J. G. 1991. Exploration and exploitation in organizational learning. Organization Science, 2: 71–87.

March, J. G., & Simon, H. A. 1958. Organizations. New York, NY: Wiley.

Milliken, F. J. 1987. Three types of perceived uncertainty about the environment: State, effect, and response uncertainty. Academy of Management Review, 12: 133–143.

Mintzberg, H. 1987. Crafting strategy. Boston, MA: Harvard Business School Press.

Mintzberg, H. 1994. Rethinking strategic planning, part I: Pitfalls and fallacies. Long Range Planning, 27: 12–21.

Murray, A., Rhymer, J., & Sirmon, D. G. 2020. Humans and technology: Forms of conjoined agency in organizations. Academy of Management Review. doi: 10.5465/amr.2019.0186

Negnevitsky, M. 2011. Artificial intelligence: A guide to intelligent systems. Essex, U.K.: Pearson Education.

Nelson, R. R., & Winter, S. G. 1982. An evolutionary theory of economic change. Cambridge, MA: Harvard University Press.

Orlikowski, W. J. 2007. Sociomaterial practices: Exploring technology at work. Organization Studies, 28: 1435–1448.

Pavitt, K. 2002. Innovating routines in the business firm: What corporate tasks should they be accomplishing? Industrial and Corporate Change, 11: 117–133.

Pentland, B. T., & Feldman, M. S. 2005. Organizational routines as a unit of analysis. Industrial and Corporate Change, 14: 793–815.

Pentland, B. T., & Hærem, T. 2015. Organizational routines as patterns of action: Implications for organizational behavior. Annual Review of Organizational Psychology and Organizational Behavior, 2: 465–487.

Polos, L., Adner, R., Ryall, M., & Sorenson, O. 2009. The case for formal theory. Academy of Management Review, 34: 201–208.

Rao, M., & McInerney, L. 2020. The A-levels fiasco. Guardian. Retrieved from https://www.theguardian.com/education/audio/2020/aug/19/the-a-levels-fiasco-podcast

Riemer, K., & Johnston, R. B. 2017. Clarifying ontological inseparability with Heidegger’s analysis of equipment. Management Information Systems Quarterly, 41: 1059–1082.

Simon, H. A. 1947. Administrative behavior: A study of decision-making processes in administrative organization. New York, NY: Free Press.

Simon, H. A. 2000. Bounded rationality in social science: Today and tomorrow. Mind & Society, 1: 25–39.

Smith, B. C. 2019. The promise of artificial intelligence: Reckoning and judgment. Cambridge, MA: MIT Press.

Still, A., & d’Inverno, M. 2019. Can machines be artists? A Deweyan response in theory and practice. Arts, 8: 36.

Tosi, H., Aldag, R., & Storey, R. 1973. On the measurement of the environment: An assessment of the Lawrence and Lorsch environmental uncertainty subscale. Administrative Science Quarterly, 18: 27–36.

Weiss, S. M., & Kulikowski, C. A. 1991. Computer systems that learn: Classification and prediction methods from statistics, neural nets, machine learning, and expert systems. San Mateo, CA: Morgan Kaufmann.

Whittaker, M., Crawford, K., Dobbe, R., Fried, G., Kaziunas, E., Mathur, V., West, S. M., Richardson, R., Schultz, J., & Schwartz, O. 2018. AI now report 2018. New York, NY: AI Now Institute at New York University.

Yi, S., Knudsen, T., & Becker, M. C. 2016. Inertia in routines: A hidden source of organizational variation. Organization Science, 27: 782–800.

Zollo, M., & Winter, S. G. 2002. Deliberate learning and the evolution of dynamic capabilities. Organization Science, 13: 339–351.




Natarajan Balasubramanian (nabalasu@syr.edu) is a professor of management at the Whitman School of Management, Syracuse University. He received his PhD from the University of California, Los Angeles. His current research interests include learning, innovation, and technology adoption in organizations.

Yang Ye (yeyang@swufe.edu.cn) is an assistant professor of management at the Research Institute of Economics and Management, Southwestern University of Finance and Economics. She received her PhD from Syracuse University. Her research explores how firms learn and overcome disadvantages with a focus on the innovative activities undertaken by the firms.

Mingtao Xu (xumt@sem.tsinghua.edu.cn) is an assistant professor at the School of Economics and Management of Tsinghua University. He received his PhD in Strategic Management from Purdue University. His research interests include technological innovation, property rights, and AI and strategy.
