Abstract

The ever-increasing investment gap for deteriorating infrastructure has necessitated the development of more effective asset management (AM) strategies. However, information asymmetry among AM stakeholder silos has been recognized as a key challenge in implementing effective AM strategies. The connectivity within the AM system introduces systemic risks (possibility of dependence-induced cascade failure) to the entire AM system operation when information asymmetry occurs. This study describes a toolbox to enable asset management stakeholders to assess such systemic risks through a network analytics approach. The network, representing the AM system, is examined through its centrality measures to identify the most critical subject areas within the AM system. These subject areas are subsequently paired with assets’ key performance indicators (KPIs). Within the developed toolbox, descriptive analytics provide transferable KPI insights between stakeholders to reduce key asset information asymmetry. In parallel, predictive analytics forecast KPIs, ensuring stakeholder awareness of future asset performance to allow for appropriate preparation. Subsequently, prescriptive analytics employ heuristic-based optimization for optimal configuration of the AM network. The five tools presented are as follows: (1) dependence identification and network modeling; (2) network centrality analysis; (3) descriptive analytics of critical subject area-paired KPI; (4) KPI-based predictive analytics; and (5) prescriptive analytics for optimal network configuration. The utility of the developed toolbox is demonstrated for Tools 1–3 using a real AM system network and KPIs associated with power transmission infrastructure outages.
Based on the analyses, managerial insights are drawn to illustrate the usefulness of the developed approach in reducing information asymmetry within the AM system, subsequently mitigating dependence-induced systemic risks.

Introduction

Infrastructure assets in Canada and the United States continue to deteriorate each year, widening the gap in infrastructure spending needed to improve the asset conditions to serviceable levels (Infrastructure Canada 2018; McBride and Moss 2020). The American Society of Civil Engineers (ASCE 2021) report gave America’s infrastructure an overall grade of C–, up from a D+ in 2017. This modest improvement was partially attributed to asset owners adopting asset management (AM) techniques to prioritize spending constrained by the limited funding resources; however, the lingering low grade is attributed to massive maintenance backlogs, deteriorating infrastructure condition, and a lack of comprehensive asset inventory and consistent condition data (ASCE 2021). The Canadian Infrastructure Report Card (2019) states that most infrastructure used daily by Canadians is more than 20 years old and deteriorating rapidly. This report also noted that effective AM plan implementation and operationalization within asset-intensive organizations are critical to maximizing the impact of limited resources. In this respect, Uddin et al. (2013) concluded that infrastructure AM encompasses the systematic and coordinated planning and programming of investments, design, construction, maintenance, operation, and in-service evaluation of physical infrastructure and associated components.
Additionally, Ross (2019) concluded that AM can be described as the collective term for the structured decision making and execution of plans to optimize the balance among infrastructure performance, efforts, and risk through the use of available assets as well as procurement of future assets.

Asset Management System Model

The Institute for Asset Management (IAM) developed the conceptual AM model in 2014 as a guide for AM professionals to implement and operate an AM approach in their organizations (IAM 2015). The IAM is the international professional body for AM professionals, and generates AM knowledge, best practice guidance, and awareness of the benefits of the AM discipline for individuals, organizations, and wider society (IAM 2021). The AM system model has also been referred to as the six-box model, as there are six connected AM divisions (Strategy & Planning, Asset Management Decision-Making, Life Cycle Delivery, Asset Information, Organization & People, and Risk & Review), as shown in Fig. 1 (IAM 2015) and summarized as follows. Strategy & Planning aligns the organization’s AM activities to fit within a consistent plan that has been developed and approved by all stakeholders. Asset Management Decision-Making reviews the challenges and makes decisions regarding how each of these stages occurs within the main areas of an asset’s life: asset acquisition/creation; operation; maintenance; and end of life disposal, decommissioning, or renewal. Life Cycle Delivery involves the entire lifespan of the asset, from acquisition/creation, through operation/maintenance, and finally, end of life disposal, decommissioning, or renewal. Asset Information is typically input into an AM process, created or modified by a process, or the output of a process. Organization & People involves a review of the organizational structure, roles, responsibilities, and contractual relationships.
Risk & Review identifies the risks related to an asset’s life cycle delivery, and understands and manages such risks; establishes a feedback mechanism within the organization to allow for input on AM objectives, strategy, and plan; and supports the continued improvement and development of AM activities.

Overall, there are 39 AM system subject areas, outlined by the Global Forum’s Asset Management Landscape (2014), across the six AM system divisions, as shown in Fig. 2. The 39 subject areas were designed to illustrate the breadth of activities within the scope of AM, the interrelationships between activities and the need to integrate them, and the critical role for AM to align with and deliver the strategic plan goals of an asset-intensive organization (IAM 2015). The connectivity among the AM subject areas within the AM system introduces systemic risks, such as the possibility of dependence-induced cascade failure. This might occur when functionality failure(s) in one or more AM subject area nodes or information flow links cascade throughout the remaining functional AM subject areas, thus hindering relevant decision-making abilities.

Background

Key to effective AM is the collective and coordinated effort that involves the collaboration of multiple stakeholders (e.g., engineering, operations, AM, finance, project management, and owner). These AM stakeholders form silos in the absence of necessary collaboration. These stakeholder silos have been shown to be the major hurdle in the implementation and operation of an effective AM system within an asset-intensive organization (Pell et al. 2015; de la Pena et al. 2016; Golightly et al. 2018). AM implementation and operation failures typically occur when stakeholders experience information asymmetry because of inadequate information-sharing protocols and/or failure to readily share key information that could mutually benefit their infrastructure’s AM system (Brunetto et al. 2014; Xerri et al.
2015; Golightly et al. 2018).

Information asymmetry occurs when one party in a relationship has more or better quality real-time or historical information than another (Bergh et al. 2019). Such information asymmetry creates systemic (i.e., dependence-induced) risks within the AM system. In such a scenario, key stakeholders do not receive vital information and cannot respond in a timely manner to the impact of different stakeholders’ decisions within the implementation and operation of their AM system. In other words, they are precluded from using real-time or historical information to support their decision making (Bergh et al. 2019). This, in turn, causes isolation of an AM subject area node or breaks some of its information flow links to other nodes, potentially inducing a cascade failure throughout the AM system network. An example of systemic risk in a different industry occurred in international banking in 2008, when the failure of Lehman Brothers caused the collapse of the global banking sector; without government bailouts to other major banking institutions, the collapse cascade would have been far greater (López-Espinosa et al. 2015; Miller 2017). In this example, there was information asymmetry in the mortgage-backed securities where risky mortgages were packaged as high-quality debt, leading to the seller having better information than the buyer (Tarver 2020). In this study, through a network analytics lens, the systemic risks involved with implementing and operating an AM system pertain to information asymmetry due to either (1) node failure(s), which represent a specific AM subject area losing functionality and thus its ability to contribute to the overall (AM system) network, or (2) link failure(s), which represent an interruption in the information flow between AM subject area nodes.

It is also important to understand that organizational structure can be either decentralized or centralized.
In a decentralized structure, the organization is divided into smaller teams in charge of specific aspects of the organization and decision making occurs at various levels within the organization (Graybeal et al. 2018). In a centralized structure, one or a select few individuals make the important decisions (e.g., resource allocation) and provide strategic direction for the organization (Graybeal et al. 2018). A decentralized structure offers benefits such as quick decision-making and response times and skilled, specialized management. Johnson & Johnson, for example, has successfully adopted this management structure across its over 200 operating companies (Weldon 2008; Mohamad et al. 2017). However, drawbacks of a decentralized structure include coordination issues between teams working toward a company’s strategic goal, and individual teams prioritizing their own goals over the organization’s goals (i.e., teams operating as silos) (Vantrappen and Wirtz 2017). The benefits of a centralized organization structure include clarity in decision making, streamlined implementation of policies and initiatives, and control over the strategic direction of the organization (e.g., Apple) (Graybeal et al. 2018). Drawbacks of the centralized system arise when employees have difficulty providing feedback on operations, and lower management levels have limited flexibility to influence changes (Vantrappen and Wirtz 2017). Decentralized decision making in infrastructure restoration was shown to be an effective approach by Crowther (2008) as well as Talebiyan and Dueñas-Osorio (2020). The AM system is typically decentralized with AM stakeholders in charge of and making decisions pertaining to specific AM subject areas (Golightly et al. 2018).
Thus, the current study views and analyzes the AM stakeholders as a decentralized system, while still proposing a centralized information database solution that addresses the main challenge of information asymmetry between AM stakeholders and ensures that AM stakeholders are not overwhelmed with too much information.

A decentralized system can be represented as a network consisting of connected nodes and links, forming a web of connected components (Barabási 2016). The nodes simulate the components of a system, whereas the links represent the dependencies among these nodes. Networks are often analyzed using specific measures related to either system components (i.e., node-based or link-based) or the entire connected system (i.e., network-based). Node-based measures focus on centrality analysis as it relates to the node’s importance in the network by assessing the connectedness of that node to other network nodes. There are different centrality measures applied in a wide variety of applications (Derrible 2012; Lee et al. 2013; Estrada and Knight 2015; Das et al. 2018; Ezzeldin and El-Dakhakhni 2019; Goforth et al. 2020).

Analytics facilitate the realization of business objectives through reporting of data to analyze trends (i.e., descriptive analytics), creating prediction models for forecasting (i.e., predictive analytics), and optimizing processes to enhance performance (i.e., prescriptive analytics) (Tsai et al. 2015; Delen and Ram 2018). Analytics have been applied in various studies within multiple infrastructure industries to improve AM processes. Descriptive analytics applications focus on deriving insights into performance trends from complex data, mainly through visualizations (Abdelfatah et al. 2013; Barker et al. 2017; Black et al. 2018; Mukherjee et al. 2018). Predictive analytics applications use historical data within machine learning models to predict an output (e.g., health index, condition, outage severity, or asset remaining life) (Zhou et al.
2016; Dehghanian et al. 2019; Yang et al. 2019; Piryonesi and El-Diraby 2020). Prescriptive analytics applications attempt to optimize intervention and maintenance planning and scheduling (Qiu et al. 2013; Chen et al. 2015; Heng et al. 2016; Abu-Samra et al. 2020). Such applications have demonstrated the benefits of using analytics to improve the specific subsets of AM, but a disconnect remains between the use of analytics and the bigger picture that considers systemic risks within the infrastructure AM system.

In addition, although studies have shown that analytics provide a competitive edge when integrated into business processes (Delen et al. 2018; Scheibe et al. 2019; O’Neill and Brabazon 2019; Hassan 2019), it is critical to first identify the key hubs within an organizational structure through which information flows, for the organization to operate effectively (McDowell et al. 2016). This concept has been studied in organizational networks, identifying key stakeholders (e.g., companies, people, or departments) that are critical to the functionality and effective operation of the organization (Barão et al. 2017; Ujwary-Gil 2019; Eisenberg et al. 2020). Nonetheless, to the best of the authors’ knowledge, the identification of AM systemic risks with a method to reduce the information asymmetry within the AM system subject areas (i.e., hubs) is yet to be developed. As such, in this study, a toolbox is created to integrate network analysis and data analytics for an AM system model that incorporates the decentralized nature of the AM stakeholders and subject areas, and presents a centralized database whereby AM subject area-specific information can be accessed by the stakeholders responsible for those subject areas.
This approach will ensure consistency across all AM subject areas while also preventing AM stakeholders from being overloaded with information, allowing them to focus on only the information necessary to make decisions within the AM subject areas they are responsible for (Herrera et al. 2011; Prajogo et al. 2018).

This paper first outlines the study goals and objectives, followed by a description of the considered network measures. Subsequently, the developed toolbox is presented to describe five distinct tools that identify critical AM system subject areas using network analysis and that use analytics with infrastructure asset key performance indicators (KPIs) to reduce the systemic risks caused by information asymmetry between dependent AM subject areas. In the current study, due to data restrictions, the utility of the toolbox is demonstrated using only the first three of the five tools, considering an AM conceptual model developed by IAM and power transmission infrastructure outage KPIs. Finally, managerial insights are drawn to illustrate how asset managers can reduce the systemic risks within an AM system.

Study Goal and Objectives

The study goal is to mitigate systemic risks within an AM system that are created by information asymmetry among dependent AM subject areas. Using the set of described tools, this study attempts to break down the silos within which infrastructure stakeholders operate, so all stakeholders use the same information to make decisions, ensuring cohesiveness among stakeholders working toward their AM goals and objectives. The described tools will enable AM stakeholders to identify the network structure of an AM system by modeling the complex connections within such a system.
In addition, the described tools will allow AM stakeholders to analyze the resulting network to identify dependence-induced systemic risks to implementing and operating an effective AM system, achieved by identifying the critical subject areas within the network using network measures (e.g., node- and link-based centralities) specific to the AM system network structure. Finally, the tools will employ descriptive, predictive, and prescriptive analytics using infrastructure KPIs (e.g., average outage duration, bridge condition index, and the number of water main failures per 1,000 km) and the AM network structure specific to the organization owning, managing, and operating the infrastructure assets to reduce the information asymmetry, with the aim of ensuring risk-informed and effective decision making. This concept, applied in a manufacturing operational performance study by Prajogo et al. (2018), illustrated that good information management practices within an organization can have a significant impact on overall business performance. The latter was accomplished through sharing information, using information technology tools within an organization, and sharing information with supply chain partners. As such, Prajogo et al. (2018) emphasized that organization management must look for ways to facilitate the sharing and centralized management of information across internal and external organizational boundaries.

Network Measures

Complex network theory allows for the modeling of complex system connections through a network of nodes and links (Boccaletti et al. 2006; Barabási 2016; Salama et al. 2020). This section provides a background of some relevant node- and network-based measures. Within the context of this study, nodes represent the main AM system subject areas and links represent the connections between the subject areas.
The links within the network are directed, indicating information, documentation, knowledge, and/or policy being transferred from a source node to a target node, and unweighted, as each connection is viewed as equally important (unless otherwise specified) to the overall operation of the AM system. The level of connectedness within an AM system necessitates a network-based model to understand each node’s importance to the AM system. An adjacency matrix (A) can be formed that describes the connectivity and disconnection between the AM system network nodes. Each element of the adjacency matrix, a_ij, is either 1, illustrating a direct connection between nodes i and j (i ≠ j), or 0 otherwise (Barabási 2016). Specific node-based centrality measures that relate to the AM network model are summarized below.

Betweenness centrality identifies nodes that play a central role in connecting to other nodes in the network (Freeman 1977). The betweenness centrality of node i (BC_i) measures the total number of shortest paths passing through node i, as expressed in Eq. (1)

(1) BC_i = \sum_{j \neq i \neq k} \frac{\rho_{jk}(i)}{\rho_{jk}}

where ρ_jk = number of shortest paths connecting node j to node k; and ρ_jk(i) = number of shortest paths connecting node j to node k that traverse node i in the network.

Closeness centrality represents how close a node is to all other network nodes (Estrada and Knight 2015). The closeness centrality of node i (CC_i) is determined by finding the shortest path using either weighted or unweighted links for node i as

(2) CC_i = \frac{N - 1}{\sum_{j=1}^{N} d(i,j)}

where d(i,j) = shortest path distance between nodes i and j; and N = total number of network nodes.

Degree centrality assesses the relative influence of nodes as the number of degrees (links) that a node directly shares with other nodes (Estrada and Knight 2015).
As such, the degree centrality of a node i (DC_i) is defined using the adjacency matrix A = (a_ij) as

(3) DC_i = \sum_{j=1}^{N} a_{ij}

This centrality measures the direct influence of a node on its connected nodes.

Eigenvector centrality quantifies the extent of node connectedness to other important (i.e., high degree centrality) nodes (Thai and Pardalos 2012). The relative centrality score of node i (x_i) for adjacency matrix A = (a_ij) is

(4) x_i = \frac{1}{\lambda} \sum_{j=1}^{N} a_{ij} x_j

where λ = a constant eigenvalue; the equation can be rearranged in vector notation as the eigenvector equation Ax = λx. This centrality indicates node importance as its connection to other important nodes and nonconnection to unimportant nodes in the network.

In addition to node failures, links might also fail, representing information asymmetry between nodes. Therefore, it is important to understand the importance of each link to the functionality of the network. A centrality metric related to the importance of information flow in a network is the link betweenness centrality (LBC) (Teixeira et al. 2016), defined as the number of shortest paths that traverse the link (Freeman 1977). In practical terms, the LBC is a measure of how central a link is to the network, and in the case of the AM system network, it measures the criticality of a specific link to information asymmetry. High-ranking links contribute systemic risks to the network, as their failure would lead to a cascading failure throughout the network. Although the links within this study are specified as unweighted, future extensions of this toolbox might incorporate link types and weights that relate to the type or criticality of information that is passed between nodes (e.g., raw, preprocessed, figures, decisions, or connections to AM standards).

In addition to the aforementioned node- and link-based centrality measures, there are network-based measures that quantify the connectedness of the overall network structure (Estrada and Knight 2015; Opdyke et al. 2017; Valentin et al. 2018).
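To make the preceding centrality definitions concrete, the sketch below computes Eqs. (1)–(4) with plain Python on a small directed network. The five subject-area names and the adjacency matrix are illustrative assumptions, not the paper's AM network.

```python
from itertools import permutations

# Hypothetical 5-node directed AM network; A[i][j] = 1 means information
# flows from subject area i to subject area j. Illustrative only.
nodes = ["Strategy", "Planning", "Decision-Making", "Delivery", "Risk & Review"]
A = [[0, 1, 0, 0, 0],
     [0, 0, 1, 0, 0],
     [0, 0, 0, 1, 0],
     [0, 0, 0, 0, 1],
     [1, 0, 1, 0, 0]]
n = len(nodes)

def shortest_paths(s, t):
    """All shortest directed paths from s to t (level-by-level search)."""
    frontier, found = [[s]], []
    while frontier and not found:
        found = [p for p in frontier if p[-1] == t]
        frontier = [p + [j] for p in frontier for j in range(n)
                    if A[p[-1]][j] and j not in p]
    return found

def betweenness(i):
    """Eq. (1): BC_i = sum over j != i != k of rho_jk(i) / rho_jk."""
    bc = 0.0
    for j, k in permutations(range(n), 2):
        if i not in (j, k):
            paths = shortest_paths(j, k)
            if paths:
                bc += sum(i in p for p in paths) / len(paths)
    return bc

def closeness(i):
    """Eq. (2): CC_i = (N - 1) / sum of shortest-path distances d(i, j)."""
    dists = [len(shortest_paths(i, j)[0]) - 1 for j in range(n) if j != i]
    return (n - 1) / sum(dists)

def degree(i):
    """Eq. (3): DC_i = sum_j a_ij (out-degree in a directed network)."""
    return sum(A[i])

# Eq. (4): eigenvector centrality x = (1/lambda) A x, via power iteration
x = [1.0] * n
for _ in range(200):
    y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    x = [v / max(y) for v in y]

ranking = sorted(range(n), key=betweenness, reverse=True)
print("Subject areas ranked by betweenness:", [nodes[i] for i in ranking])
```

In practice, a library such as networkx provides these measures directly (including edge betweenness for the LBC); the explicit loops above simply mirror the equations.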
The most relevant to the AM network are described below.

Network density (ND) represents the ratio of actual links within a network to the potential links that could be formed within the network if the network were fully connected (Barabási 2016). It can be calculated using the following equation for a directed network:

(5) ND = \frac{l}{n(n - 1)}

where l = number of links in the network; and n = number of nodes in the network. The network density is a measure of the network’s health and effectiveness. The ratio has values ranging from 0 (a completely unconnected network) to 1 (a fully connected network). In AM applications, it can assess the level of connectedness between all subject areas in sharing information and indicate the susceptibility to network failure (for low values).

Average degree centrality (ADC) is the ratio of the summation of the degree centrality values for all nodes to the total number of nodes in the network (Barabási 2016). It can be calculated using the following equation for all i nodes:

(6) ADC = \frac{1}{n} \sum_{i=1}^{n} DC_i

where DC_i = degree centrality for node i; and n = total number of nodes in the network. This measure indicates how quickly disruptions can diffuse throughout the network. Within the context of AM, this measure refers to network dependence and highlights the systemic risk from the cascading effects of failed AM system subject areas.

Network Analytics Toolbox

To address the study goal and objectives, the following five tools were developed, as shown in Fig.
3: (1) dependence identification and network modeling, where the AM network structure is identified and modeled; (2) network centrality analysis, to identify the critical AM subject areas causing systemic risk; (3) descriptive analytics of critical subject area-paired KPI, to develop targeted visualizations that focus on only the necessary information for decision making relevant to specific AM subject areas; (4) KPI-based predictive analytics, to forecast KPI metrics to enable more proactive decision making; and (5) prescriptive analytics for optimal network configuration, to minimize AM network systemic risks. Each of these tools will be described in detail later in the paper to explain how each tackles the study goal and objectives.

Tool 1: Dependence Identification and Network Modeling

Tool 1 describes the process for identifying the network structure of an AM system. The first step in implementing Tool 1 involves modeling an organization’s AM system in terms of its specific subject areas that describe the implementation and operation of its AM system as a network. In this respect, the connections between subject areas are identified based on expert AM knowledge. The links between nodes represent a decision, information or data transfer, strategy, or policy that is passed from one node (source node) to another (target node). It is these links that define the dependence between subject areas to form the AM system network. These links are indicated in the adjacency matrix, as illustrated in Fig. 4, where a link presence is indicated by a 1 and its absence is indicated by a 0. As mentioned earlier, the links do not have an associated weight value, as each link represents the presence of a connection in the form of a decision, information or data transfer, strategy, or policy that is passed from a source node to a target node. The adjacency matrix can then be used to visualize the network, as illustrated in Fig. 4. For example, Fig.
4 shows 10 AM subject areas in the adjacency matrix and illustrates a potential AM network model with nodes representing subject areas and links representing the connections between subject areas.

Tool 2: Network Centrality Analysis

Tool 2 describes the process for analyzing the resulting network to identify dependence-induced systemic risks to implementing and operating an effective AM system. The AM network layout, as generated using Tool 1, is employed by Tool 2 to calculate the centrality measures, identifying the importance of each AM subject area within an AM system. Such centralities, illustrated in Fig. 5, highlight potential node/link systemic risk in the AM system where the centralities are converted to a ranked list of node/link importance. In this respect, betweenness centrality is a measure of the importance of the subject area to the overall implementation of the AM system within an organization. Closeness centrality is a measure of indirect AM information flow between the not-directly connected nodes. Degree centrality is a measure of the criticality of AM subject areas to the dependent subject areas. Eigenvector centrality is a measure of node connectedness and importance to other highly connected nodes, identifying subject areas that have a strong influence on other important AM subject areas. An organization would need to determine the centrality measure that is most relevant to its implementation strategy. For example, an organization conducting a preliminary screening of its AM structure would utilize eigenvector centrality to determine subject areas that influence other highly influential subject areas, allowing the organization to focus its attention on a few subject areas to maximize the impact of improvement in its AM system. When the calculated node- or link-based centrality measures are high, there is a greater likelihood of the AM network failing if such an important subject area or link were to become dysfunctional.
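Tools 1 and 2 can be sketched end to end: a hypothetical adjacency matrix (the Tool 1 output) is reduced to the network density of Eq. (5), the average degree centrality of Eq. (6), and a ranked subject-area importance list (the Tool 2 output). The four-node matrix below is an invented example, not the 10-subject-area network of Fig. 4.

```python
# Hypothetical Tool 1 output: adjacency matrix of a 4-node directed AM
# network (rows = source subject area, columns = target subject area).
A = [[0, 1, 1, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [1, 0, 0, 0]]
n = len(A)
l = sum(sum(row) for row in A)          # number of directed links

# Eq. (5): network density for a directed network, ND = l / (n * (n - 1))
ND = l / (n * (n - 1))

# Eq. (3) per node, then Eq. (6): average degree centrality, ADC = sum(DC_i) / n
DC = [sum(row) for row in A]
ADC = sum(DC) / n

# Tool 2 output: subject areas ranked by degree centrality, most critical first
ranking = sorted(range(n), key=lambda i: DC[i], reverse=True)
print(f"ND = {ND:.2f}, ADC = {ADC:.2f}, ranking = {ranking}")
```

A low ND would flag susceptibility to network failure, while a high ADC would flag how quickly a disruption could diffuse through the AM system.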
Therefore, the importance ranking identifies the most critical subject areas and links exposed to the dependence-induced systemic risk involved with the implementation and operation of an AM system.

Tool 3: Descriptive Analytics of Critical Subject Area-Paired KPI

Tool 3 describes the process for implementing descriptive analytics for infrastructure asset KPIs. The subject areas most exposed to systemic risk, as identified from the centrality measures from Tool 2, can be further analyzed using Tool 3 when paired with infrastructure-industry-specific KPIs. Tool 3 visualizations are designed to draw their information from a centralized database, to display only KPI information directly related to an AM subject area, and to ensure that stakeholders focus only on the information necessary to make decisions within that AM subject area rather than becoming overwhelmed by all AM information from the centralized database. For example, the subject area outage management is paired with the KPIs average outage duration year-over-year and the trend in the average outage duration, as shown in Fig. 6. Fig. 6 presents an example of two KPIs and their evolving values with time. It should be noted that only the KPI values are expected to continue to change with time (i.e., dynamic) as new information becomes available, whereas their pairing to the systemic risk-critical AM subject areas is expected to remain largely the same (i.e., static) given that the AM system network is not expected to change with time.

Infrastructure KPIs are metrics of a specified asset or overall infrastructure network performance. KPIs can be continuous or discrete, and can also be qualitative (e.g., low, medium, high) or quantitative (e.g., 50%–70%) in nature.
Although the KPIs are paired to the AM subject areas, making them static in terms of their evaluation approach, their values are nonetheless expected to be dynamic as they continuously change with time under different conditions (e.g., climate). For example, the pavement industry uses the pavement condition index and international roughness index; the bridge industry uses the bridge condition index; the power industry uses the system average interruption frequency index, system average interruption duration index, and mean outage duration; and the water and wastewater industry uses the number of breaks per year, the number of failures per 1,000 km, and leakage of water per year (Uddin et al. 2013; Alzoor et al. 2021). Because the KPIs differ between infrastructure industries, Tool 3 pairs the critically dependent subject areas of the AM system with the relevant KPIs. Descriptive analytics can then be used to illustrate these paired KPIs, as shown in Fig. 6. Descriptive analytics often include building a KPI-tailored dashboard that allows user interaction to gain useful KPI insights (Wexler et al. 2017). Tool 3 enables clear dashboards to be circulated among stakeholders to ensure that every stakeholder is informed on the KPIs related to the important AM subject areas. Huang et al. (2019) showed that, when stakeholders can see the impact of their work, they are more likely to develop trust in the management processes and therefore share information internally more readily. Therefore, Tool 3 facilitates clear visualizations to be circulated among the stakeholders who manage the key AM subject areas, ensuring that these stakeholders see only the necessary information related to the decisions they need to make within the AM subject area they are responsible for.
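As a minimal descriptive-analytics sketch, the snippet below derives the two example KPIs (average outage duration year over year and its trend) from hypothetical outage records; the numbers are invented, and a production dashboard would draw from the centralized database instead.

```python
from collections import defaultdict

# Hypothetical outage records drawn from a centralized database:
# (year, outage duration in hours). Illustrative values only.
records = [(2019, 4.0), (2019, 6.0), (2020, 5.0), (2020, 9.0),
           (2021, 7.0), (2021, 11.0)]

# KPI 1: average outage duration, year over year
by_year = defaultdict(list)
for year, hours in records:
    by_year[year].append(hours)
avg_by_year = {year: sum(h) / len(h) for year, h in sorted(by_year.items())}

# KPI 2: trend in the average outage duration, taken here as the slope of
# an ordinary least-squares line through the yearly averages
years, vals = list(avg_by_year), list(avg_by_year.values())
ybar, vbar = sum(years) / len(years), sum(vals) / len(vals)
trend = (sum((y - ybar) * (v - vbar) for y, v in zip(years, vals))
         / sum((y - ybar) ** 2 for y in years))
print(avg_by_year, f"trend = {trend:+.1f} h/year")
```

The same two quantities are what a Tool 3 dashboard would plot for the outage management subject area.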
Tool 3 applications, in turn, allow for AM stakeholders to monitor their impact on the AM system for their specific AM subject areas, and ensure that all decisions specific to each AM subject area are made using consistent information.

Tool 4: KPI-Based Predictive Analytics

Tool 4 describes the process for implementing predictive analytics for infrastructure KPIs. Building on the descriptive analytics of Tool 3, Tool 4 is focused on developing a predictive analytics model for the aforementioned KPIs. Fig. 7 outlines the process for this tool by including historical KPI performance within a machine learning model to output forecasted KPI metrics. Machine learning models are typically classified as either supervised (i.e., developing a mathematical function that maps the relationship between specific input–output pairs) or unsupervised (i.e., categorizing the dataset based on similarity, without pre-specifying outputs) (Zumel and Mount 2020). Examples of machine learning models include decision trees, artificial neural networks, and support vector machines (Aggarwal 2015).

Input data are employed to train the machine learning model to predict a numerical or categorical output based on the provided contributing features (Hastie et al. 2009). This allows a decision maker to predict a KPI output value based on contributing features (e.g., climatic conditions, economic conditions, geographic location, asset characteristics, time, and maintenance history) and past historical KPI values (Haggag et al. 2021). Any additional input features would be infrastructure-industry-specific; therefore, the organization would need to establish which features would be accessible before building a KPI predictive analytics model. Other research studies have successfully predicted specific infrastructure KPIs within different industries in isolation (Zhou et al.
2019; Piryonesi and El-Diraby 2020), therefore a summary of some key techniques deployed in those studies to meet the goals of Tool 4 is provided below.For example, Fig. 7 is presented as an illustration of this tool to show a forecasted 5-year period for the KPI, average outage duration, used by the power industry. The input features include contributions to the outage (e.g., outage cause, failure mode, and climatic information) and component or system characteristics (e.g., voltage, affected component/system, time of the outage, and geographic location). As would be expected, the outputs from Tool 4 applications are only good for AM decision making if the information used for inputs is of high quality and pertains to meaningful data. Koziel et al. (2021) investigated the impact of using faulty data in AM decision making and found that there were significant implications on optimal replacement schedules. Therefore, high-quality data must be gathered related to each AM subject area-paired KPI to facilitate reaching the most effective AM decisions. The two lines in Fig. 7 indicate different organizations and their forecasted KPIs. The KPI predictive analytics model would forecast future performance, allowing stakeholders to be informed and proactively plan for more effective AM. 
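To illustrate the supervised-learning step of Tool 4, the following sketch trains a decision-tree-based model on synthetic outage records; the feature encoding, the assumed feature-KPI relationship, and all names are illustrative assumptions, not the CEA data or the authors' model:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Synthetic stand-ins for contributing features (all hypothetical):
# encoded outage cause, encoded voltage class, and month of occurrence.
n = 300
X = np.column_stack([
    rng.integers(0, 5, n),    # outage cause category
    rng.integers(0, 3, n),    # voltage class
    rng.integers(1, 13, n),   # month of the outage
])
# Assumed relationship between features and the KPI, for demonstration only.
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0.0, 0.5, n)

# Supervised model mapping the contributing features to the KPI
# (e.g., average outage duration in hours).
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Forecast the KPI for a new outage scenario.
predicted_duration = model.predict(np.array([[3, 2, 7]]))[0]
```

In practice, lagged values of the historical KPI series would also be included among the features, and the model class would be chosen based on the data types available to the organization.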
Tool 4 applications improve information symmetry in that all stakeholders are aware of, and strive toward, a clear performance goal for their critical AM KPIs. They also ensure consistency among the stakeholders that manage the AM subject areas, because decisions can be made based on consistent predictive models built specifically for each AM subject area-paired KPI.

Tool 5: Prescriptive Analytics for Optimal Network Configuration

Tool 5 describes the process for implementing prescriptive analytics by optimizing the AM subject area network: links are added to the original network to minimize the average value of a chosen centrality measure (e.g., betweenness, closeness, degree, or eigenvector) or of multiple centralities, depending on the objective function (Thai and Pardalos 2012). The optimization problem would include a constraint on the number of allowable link additions before the network becomes too centralized or nonfunctional, and each link addition would carry a cost in terms of the new information, policy, or decision to be transferred. The optimization would minimize the systemic risks in the AM network by reducing the impact of failures related to high-centrality nodes or links through the addition or subtraction of links, while still maintaining the functionality of the AM system network. Such optimization would employ, for example, genetic algorithms or other heuristics, in which a population of solutions is generated and evolved until a (near) optimal solution is obtained (Goldberg 1989). Each solution within the population (i.e., individual) represents a single realization of the input features.
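A minimal sketch of how one candidate solution could be evaluated in such an optimization, assuming networkx, a toy five-node network, and a simple link-budget constraint (the node names, links, and budget are all hypothetical):

```python
import itertools
import networkx as nx

# Toy directed AM subject-area network (hypothetical nodes and links).
base = nx.DiGraph([
    ("strategy", "planning"), ("planning", "resourcing"),
    ("resourcing", "delivery"), ("planning", "delivery"),
    ("delivery", "review"), ("review", "strategy"),
])

# Candidate links that an individual may switch on (gene = 1) or off (gene = 0).
candidates = [(u, v) for u, v in itertools.permutations(base.nodes, 2)
              if not base.has_edge(u, v)]

def fitness(individual, max_new_links=3):
    """Average betweenness centrality of the network after adding the
    candidate links whose genes are 1; infeasible beyond the link budget."""
    if sum(individual) > max_new_links:
        return float("inf")  # violates the allowable-link constraint
    g = base.copy()
    g.add_edges_from(link for gene, link in zip(individual, candidates) if gene)
    centrality = nx.betweenness_centrality(g)
    return sum(centrality.values()) / len(centrality)

# Fitness of the original configuration (no links added).
baseline = fitness([0] * len(candidates))
```

A genetic algorithm would then evolve a population of such 0/1 link vectors toward the feasible configuration with the lowest fitness, i.e., the lowest average betweenness.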
New individuals are reproduced through special evolutionary operators, including (1) elitism, where individuals with greater fitness are replicated; (2) crossover, where sets of two individuals (i.e., parents) are selected based on predefined criteria (e.g., random selection or selection based on the fitness value) and subsequently mixed to produce new individuals; and (3) mutation, where single parents are altered randomly to produce new individuals (Nearchou 2004; Scrucca 2013; Yosri et al. 2021). Within the AM system network optimization, each link would be represented as a feature within the individual, with a value of either 1, indicating link presence, or 0, indicating link absence. The application of Tool 5 would yield an optimized configuration of the AM system such that the systemic risks are minimized, as illustrated in the new network configuration shown in Fig. 8.

Toolbox Application to Power Transmission Infrastructure

The developed toolbox was demonstrated using the IAM’s conceptual model for an AM network and transmission infrastructure asset outage data gathered by the Canadian Electricity Association (CEA). The IAM is the international professional body for AM professionals; it develops AM knowledge and best-practice guidelines and generates awareness of the benefits of the AM discipline for individuals, organizations, and the wider society (IAM 2021). CEA membership includes generation, transmission, and distribution power utilities and industrial partners from across Canada (CEA 2020). The toolbox was applied in this setting to demonstrate its use within the asset-intensive power transmission industry.

Project Description

The IAM conceptual model was used to build the infrastructure AM network. To show the connection between important AM subject areas and industry-specific KPIs, a transmission infrastructure asset outage dataset was obtained from the CEA, covering the period from 1978 to 2018.
The transmission infrastructure network is critical to the reliable delivery of power from generators to substations and on to customers. Therefore, the effective and efficient management of transmission infrastructure assets is critical for safe and reliable power delivery. The transmission equipment outage data are for equipment operating at high voltages of 60 kV and above (CEA 2018). The outages are recorded for transmission infrastructure components including transmission lines, cables, transformer banks, circuit breakers, synchronous compensators, static compensators, shunt reactor banks, shunt capacitor banks, and series capacitor banks. The KPIs recorded and published by the CEA in its annual report are shown in Table 1, along with a definition of each KPI metric. This demonstration of the toolbox involves the application of Tools 1 to 3 only, because the data needed for the implementation of Tools 4 and 5 are restricted by transmission infrastructure owners/operators for their internal use. As such, the demonstration focuses on describing the utility of the toolbox for identifying the AM subject areas that are most critical in inducing systemic risk within an AM system.

Table 1. KPIs calculated and published in the CEA’s annual report

Frequency (per 100 km-year): The number of outages divided by km-years, divided by 100
Frequency (per year): The number of outages divided by component-years
Number of outages: The number of major component-related forced outages
Total outage duration (h): The total forced unavailable time (i.e., the time required to completely restore a component to service) of the component-related outages
Average outage duration (h): The total outage duration divided by the number of outages
Median outage duration (h): The value that 50% of the forced unavailability times exceed
Unavailability (%): The product of frequency and average outage duration in years, expressed as a percentage of the component’s population

Network Analysis

Based on the available details from the IAM’s conceptual model subject areas and the connections between subject areas as outlined by the Global Forum’s Asset Management Landscape, the adjacency matrix (Table S1 in the Supplemental Materials) was developed (Global Forum on Maintenance and Asset Management 2014). The connections were specified within the report for each subject area as related subjects and artefacts. The colors of the subject areas in the adjacency matrix (Table S1) correspond to the AM divisions from Fig. 2. Tool 1 uses the adjacency matrix to develop the network model shown in Fig. 9. A transmission utility AM system would typically include the AM subject areas from Figs. 2 and 9. The node color refers to the AM division of the subject area. The network is directed because subject areas typically pass knowledge, information, and policy in only one direction (i.e., from source to target nodes). The link color matches that of the source node. Using color as a distinguishing feature allows for the identification of clusters of AM division-based subject areas. This is shown in Fig.
9, where the Asset Information and Organization & People division subject area nodes are highly interconnected within their clusters. Conversely, the Strategy & Planning division subject areas are not clustered among themselves, but instead are highly connected with other subject areas.

Network modeling is useful for viewing node connections, whereas node-, link-, and network-based centrality analyses are needed to identify the highly dependent nodes/links that induce systemic risk to the AM network. Fig. 10 shows the top 10 subject areas for each of the previously described centrality measures. Of note are the Strategy & Planning division subject areas of Asset Management Strategy and Objectives, Asset Management Planning, and Strategic Planning. These areas all rank high for betweenness centrality, degree centrality, and eigenvector centrality. This indicates that, for an organization to implement an effective AM system, it must have a strong AM plan and objective targets. In addition, the Operations and Maintenance Decision-Making and Resourcing Strategy subject areas of the Asset Management Decision-Making division ranked high among the centrality measures.

The links shown in Table 2 are the critical links contributing to systemic risks within the AM system. The betweenness centrality of each link was found per the network measure previously described. The links are ordered based on their criticality, which indicates their importance to the network functionality. Of note within Table 2, the Resourcing Strategy and Asset Management Planning nodes have multiple important links, suggesting that it is particularly important for these AM subject areas to have excellent communication with the connected AM subject areas.

Table 2. Top 10 AM network links by betweenness centrality

Source node | Target node | Betweenness centrality
Asset management planning | Resourcing strategy | 283
Resourcing strategy | Resource management | 282
Strategic planning | Asset management planning | 248
Asset management strategy and objectives | Stakeholder engagement | 241
Maintenance delivery | Reliability engineering | 138
Operations and maintenance decision-making | Maintenance delivery | 133
Resource management | Competence management | 133
Stakeholder engagement | Operations and maintenance decision-making | 114
Asset management strategy and objectives | Strategic planning | 112
Stakeholder engagement | Asset management planning | 110

In addition to the node-based centrality measures, the network-based measures are important for evaluating the overall resilience of the network to potential failures (Barabási 2016). The average degree centrality of the AM subject area network is 3.15 and the network density is 0.08, meaning that only 8% of the potential links of a fully connected network connect the AM subject areas. This implies that the network is vulnerable to systemic risk: if one or more of the previously identified critical nodes/links were disrupted, the AM system would be greatly impacted.

Descriptive Analytics

Three of the critically dependent subject areas, as determined from the node-based centrality analysis in Fig. 10, are used to illustrate the use of descriptive analytics for subject area-paired KPI analysis. Asset Management Strategy and Objectives, Asset Management Planning, and Operations and Maintenance Decision-Making were chosen for illustration because these subject areas ranked high in the centrality measures previously analyzed. Only 3 of the 39 subject areas were chosen, for the sake of brevity, to illustrate the use of Tool 3; however, organizations should employ descriptive analytics to pair each subject area with at least one infrastructure AM KPI.
It should be noted that insights into multiple subject areas can be drawn from the same figure, as illustrated below. Fig. 11 addresses the Asset Management Strategy and Objectives subject area, which focuses on developing a long-term plan for managing an organization’s infrastructure assets (IAM 2015). Fig. 11 shows how an organization can compare its AM KPIs (e.g., median outage duration in hours and number of outages) with those of other organizations. In this context, the term organization, shown in Fig. 11, indicates anonymized transmission utilities that contributed outage data for all shown years. Fig. 11 depicts the three main sections from which a stakeholder can obtain information after selecting the asset type in the view from the menu (titled Component). The three sections are as follows: (1) the top bar graph indicates the KPIs for the period selected on the side slider menu, along with the median KPI value among all organizations over the selected period (e.g., 2014–2018); (2) the line graph shows the changing KPI for each organization (color) over the period indicated by the side slider menu; and (3) the bottom point graph is based on the organization selected from the menu (titled Organization) and is broken down by voltage class to highlight long-duration outage events, with an option to select a specific point to display the outage features. Descriptive analytics in this application allow the key stakeholders to track the objective KPIs while also investigating the long-duration outages to understand and remedy them.

Fig. 12 illustrates the KPIs related to two important subject areas: Asset Management Planning and Operations and Maintenance Decision-Making. The Asset Management Planning subject area focuses on achieving the AM objectives and identifying related risks arising from previous asset failures, whereas the Operations and Maintenance Decision-Making subject area focuses on ensuring a predictable and acceptable level of service throughout the asset’s life (IAM 2015). Fig. 12 covers both subject areas and contains two sections: (1) a bar graph showing current-year performance compared with previous-year performance, which can be changed using the arrow selector on the right side of the dashboard; and (2) a sparkline showing the KPI of the specified subcomponent (selected from the dropdown menu, Subcomponent Group) over the years specified in the side menu (titled Year). This descriptive analytics application allows an organization to monitor the year-over-year changes in average outage duration to analyze previous asset failures and develop mitigation plans to ensure that the issues do not arise again. In addition, the service level and trending performance of the specified asset subcomponent can be monitored to ensure that an acceptable level of service is being provided by that subcomponent and to view the impact of operations and maintenance decisions. For example, the trending performance, indicated by the sparkline, for transformer banks shows a decrease in average outage duration; this indicates that the operations and maintenance decisions being made are positively affecting performance. This descriptive analytics application would then be distributed to all AM stakeholders so that they can see the positive impact of their coordinated effort in improving operations and maintenance decisions.

Managerial Insights

The described network analytics toolbox and the subsequent demonstration yield managerial insights that can reduce the information asymmetry between AM subject areas by targeting corresponding systemic risks within the AM system network.
The following subsections coincide with the main AM divisions of Strategy & Planning, Asset Management Decision-Making, Lifecycle Delivery, Asset Information, Organization & People, and Risk & Review. The insights are presented from the viewpoint of an asset manager within an infrastructure asset-intensive organization and are transferable across infrastructure AM industries. The application of Tools 1 and 2 would be organization-specific, whereas Tools 3 to 5 would have industry-specific KPIs and additional features, as outlined in the respective tool descriptions.

Strategy & Planning

Based on the centrality measures analyzed in the demonstration of the AM network, some Strategy & Planning subject areas were shown to be very important to the successful implementation and operation of an AM system. This provides evidence that the most critical aspect of a successful AM system is the development of a clear and precise strategy and plan to improve information symmetry among stakeholders (e.g., all stakeholders will be guided by the same clearly defined strategy and plan and not overwhelmed by too much information). This also ensures that all other dependent lifecycle stages and decisions that follow will be guided by a clear strategy.

Asset Management Decision-Making

Key subject areas, based on the centrality measures, that are important to effective decision making within an AM system are Resourcing Strategy, Capital Investment Decision-Making, Operations and Maintenance Decision-Making, and Lifecycle Value Realization, as shown in Fig. 10. These critical AM subject areas highlight the necessity for all AM stakeholders to make decisions based on the same information. Descriptive analytics applications showed how stakeholders can stay informed of current information and see the effects of their decisions on the AM KPIs. Fig.
12 illustrates this concept by showing the KPI variation related to the specified AM subject areas, allowing stakeholders to view the impact of their AM decisions on the KPIs of the assets.

Lifecycle Delivery

The lifecycle of an asset includes acquisition, operation, maintenance, and disposal. This cycle operates continuously within an infrastructure asset-intensive organization, as infrastructure assets are at different stages of their lifecycles. As shown in Fig. 10, the Lifecycle Delivery subject areas rank high for the closeness centrality measure, indicating that these subject areas are important to the indirect information flow within the AM system. This means that information is passed to a node not through a direct connection, but through one or multiple other nodes. The nodes that pass information to other nodes typically process such information so that it can be readily used by the following node. For example, Fig. 10 shows that the Fault and Incident Response subject area processes the information from the Contingency Planning and Resilience Analysis subject area before passing it on to the Risk Assessment and Management subject area.

Asset Information

Digital data related to the KPIs for infrastructure assets are critical to implementing the network analytics toolbox. Asset information is most valuable in a digital format so that it can be used to track KPIs and implement the tools to improve the value provided by assets throughout their lifecycles. The Asset Information subject areas are highly connected, as shown in Fig. 9 and Table S1 (Supplemental Materials), indicating that if one were to be disrupted, the others would also be disrupted. An asset manager should note that the design and development of a robust asset information collection, reporting, and storage system are critical to producing informative results on asset performance, which in turn are necessary to evaluate the effectiveness of the AM plan being implemented. Throughout the lifecycle of an asset, it is critical to consistently collect and store the KPIs associated with the asset’s performance so that the KPIs can be monitored on a timeline tailored to the specific infrastructure asset (e.g., monthly, yearly, or every 5 years). This will enable Tools 3–5 to be deployed effectively to improve the AM system. In addition, a consistent asset information reporting method enables stakeholders to access the same information, therefore reducing information asymmetry.

Organization & People

The Organization & People subject areas were also clustered in the AM network, suggesting that most subject areas within this division are connected and that any disturbance in one area will affect all the others. The asset manager should use this insight to ensure that all stakeholders are clear about the AM strategy, goals, and implementation plan, to reduce the potential for information asymmetry between stakeholder silos. Stakeholder buy-in to implementing an AM system is critical; strong organizational management is vital so that stakeholders can see the positive effects of information symmetry on infrastructure asset KPIs following the implementation of an AM system.

Risk & Review

The risk and review process is critical in evaluating the effectiveness of the AM plan within an organization. The use of descriptive analytics allows asset managers to efficiently evaluate the KPIs for the organization’s assets. Targeting descriptive analytics applications to specified critical AM subject areas allows asset managers to reach detailed information quickly, minimizing the time needed to search for the result they are looking for. Descriptive analytics also allow for rapid consultation among stakeholders, therefore improving the review process and the necessary collaboration. Deploying Tool 3 also allows automatic updates to occur in the figures so that a stakeholder does not need to continuously update figures for use in reports.
This enables all stakeholders to have access to the same information, allowing them to make decisions with the most accurate and up-to-date information.

Conclusion

Global infrastructure assets are continuously deteriorating, and the current condition of infrastructure is poor in both Canada and the United States. To maximize the value of each dollar spent on infrastructure repair, rehabilitation, replacement, and maintenance, effective and efficient asset management (AM) practices are needed. One of the main challenges in implementing and operating an effective AM system within an organization is dealing with the systemic risks caused by information asymmetry among dependent AM system subject areas. This paper described a network analytics toolbox to identify the systemic risks in AM systems and reduce information asymmetry by using key performance indicator (KPI) analytics paired with the critical AM subject areas. Five tools were presented, as follows: (1) Dependence Identification and Network Modeling; (2) Network Centrality Analysis; (3) Descriptive Analytics of Critical Subject Area-Paired KPI; (4) KPI-Based Predictive Analytics; and (5) Prescriptive Analytics for Optimal Network Configuration. Tool 1 describes how to build an AM network from an organization’s AM system; the connections between the AM system subject areas are used to develop an adjacency matrix, which is then used to build the AM system network. Tool 2 employs node- and network-based centrality measures to determine the nodes most critical to the operation of the AM system network. Tool 3 takes the critical AM subject areas identified from the node-based centrality measures and uses descriptive analytics to track KPIs that directly relate to the important AM subject areas. Tool 4 uses historical KPI values and additional influencing features within a machine learning model to predict future KPIs. Tool 5 uses the existing AM network structure and applies optimization to generate the optimal AM system network configuration that minimizes the systemic risks.

The toolbox was subsequently deployed to demonstrate three of the five tools using the Institute for Asset Management’s conceptual model and transmission infrastructure asset outage KPIs. Critical AM subject areas were identified through the node- and link-based centrality measures, and descriptive analytics applications were deployed so that transmission utilities would be able to track their KPIs as they directly relate to the important subject areas. Key managerial insights for identifying systemic risks within the AM system and reducing information asymmetry between AM stakeholders were highlighted as they relate to each of the major AM divisions.

Understandably, for the tools to be effective, their implementation requires active participation among all AM stakeholders. As in all data-driven models, high-quality input data are necessary to achieve a useful output. Specific, and expected, limitations of Tool 1 relate to its dependency on an organization’s record-keeping of its AM system and/or an organization’s level of understanding of how the AM subject areas are linked together. For Tool 2, if any component (i.e., node or link) changes due to organizational restructuring, then all centrality values would need to be revised. Tool 3 requires an infrastructure-industry-specific expert to pair AM subject areas with relevant KPIs, and relevant data must exist within the existing database to generate the KPIs. Tool 4 is influenced by the features available as inputs to the machine learning model (e.g., if all available features are categorical, then a limited number of machine learning models can be used, and the output will also be categorical). Finally, the ability of Tool 5 to provide an exact (or near-exact) solution might be affected by the complexity of the objective function and constraints, indicating that users might resort to heuristics, for example, to reach a solution. In addition, it is expected that organizations would implement the described tools in the sequential order indicated, becoming comfortable with each tool before implementing the next. By adopting the toolbox presented in this study, stakeholders are expected to be able to reduce the systemic risks within an AM system using AM subject area-specific Tool 3 outputs that display information from a centralized database, thus ensuring that an AM subject area’s information is not siloed from the overall system. This also ensures that AM stakeholders make decisions using a consistent information source, reducing the likelihood of stakeholders acting in silos and causing information asymmetry.

Data Availability Statement

Some or all data, models, or code used during the study were provided by a third party (i.e., transmission line outage data). Direct requests for these materials may be made to the provider, as indicated in the Acknowledgments. The data used to build the AM network were retrieved from IAM (2015) and Global Forum on Maintenance and Asset Management (2014).

Acknowledgments

The data for this study were provided by the Canadian Electricity Association (CEA), and CEA support in the development of this study is greatly appreciated. The financial support for the study was provided through the Canadian Nuclear Energy Infrastructure Resilience under Systemic Risk (CaNRisk)–Collaborative Research and Training Experience (CREATE) program of the Natural Science and Engineering Research Council (NSERC) of Canada. The support of the INTERFACE Institute and the INViSiONLab is also acknowledged in the development of this study.
In addition, the authors thank the anonymous reviewers for their detailed comments, which helped clarify the systemic risk definition.

References

Abdelfatah, M., M. El-Shimy, and H. M. Ismail. 2013. “Outage data analysis of utility power transformers based on outage reports during 2002–2009.” Int. J. Electr. Power Energy Syst. 47 (1): 41–51.
Aggarwal, C. C. 2015. Data mining: The textbook. New York: Springer.
ASCE. 2021. 2021 report card for America’s infrastructure. Reston, VA: ASCE.
Barabási, A.-L. 2016. Network science. Cambridge, UK: Cambridge University Press.
Barão, A., J. B. de Vasconcelos, Á. Rocha, and R. Pereira. 2017. “A knowledge management approach to capture organizational learning networks.” Int. J. Inf. Manage. 37 (6): 735–740.
Barker, K., J. H. Lambert, C. W. Zobel, A. H. Tapia, J. E. Ramirez-Marquez, L. Albert, C. D. Nicholson, and C. Caragea. 2017. “Defining resilience analytics for interdependent cyber-physical-social networks.” Sustainable Resilient Infrastruct. 2 (2): 59–67.
Bergh, D. D., D. J. Ketchen, I. Orlandi, P. P. M. A. R. Heugens, and B. K. Boyd. 2019. “Information asymmetry in management research: Past accomplishments and future opportunities.” J. Manage. 45 (1): 122–158.
Black, J., A. Hoffman, T. Hong, J. Roberts, and P. Wang. 2018. “Weather data for energy analytics: From modeling outages and reliability indices to simulating distributed photovoltaic fleets.” IEEE Power Energy Manage. 16 (3): 43–53.
Canadian Infrastructure Report Card. 2019. Monitoring the state of Canada’s core public infrastructure: The Canadian infrastructure report card 2019. Ottawa: Canadian Construction Association.
CEA (Canadian Electricity Association). 2018. Instruction manual for reporting component forced outages for transmission equipment. Ottawa, ON: CEA.
Chen, L., T. F. P. Henning, A. Raith, and A. Y. Shamseldin. 2015. “Multiobjective optimization for maintenance decision making in infrastructure asset management.” J. Manage. Eng. 31 (6): 04015015.
Crowther, K. G. 2008. “Decentralized risk management for strategic preparedness of critical infrastructure through decomposition of the inoperability input-output model.” Int. J. Crit. Infrastruct. Prot. 1 (3): 53–67.
Dehghanian, P., B. Zhang, T. Dokic, and M. Kezunovic. 2019. “Predictive risk analytics for weather-resilient operation of electric power systems.” IEEE Trans. Sustainable Energy 10 (1): 3–15.
Delen, D., G. Moscato, and I. L. Toma. 2018. “The impact of real-time business intelligence and advanced analytics on the behaviour of business decision makers.” In Proc., 2018 Int. Conf. on Information Management and Processing, 49–53. New York: IEEE.
Eisenberg, D. A., J. Park, and T. P. Seager. 2020. “Linking cascading failure models and organizational networks to manage large-scale blackouts in South Korea.” J. Manage. Eng. 36 (5): 04020067.
Estrada, E., and P. Knight. 2015. A first course in network theory. Oxford, UK: Oxford University Press.
Ezzeldin, M., and W. E. El-Dakhakhni. 2019. “Robustness of Ontario power network under systemic risks.” Sustainable Resilient Infrastruct. 1–20.
Global Forum on Maintenance and Asset Management. 2014. The asset management landscape. Oakleigh, Australia: Asset Management Council.
Goforth, E., M. Ezzeldin, W. El-Dakhakhni, L. Wiebe, and M. Mohamed. 2020. “Network-of-networks framework for multimodal hazmat transportation risk mitigation: Application to used nuclear fuel in Canada.” J. Hazard. Toxic Radioact. Waste 24 (3): 04020016.
Goldberg, D. E. 1989. Genetic algorithms in search, optimization and machine learning. Boston: Addison-Wesley Longman.
Golightly, D., G. Kefalidou, and S. Sharples. 2018. “A cross-sector analysis of human and organisational factors in the deployment of data-driven predictive maintenance.” Inf. Syst. e-Bus. Manage. 16 (3): 627–648.
Graybeal, P., M. Franklin, and D. Cooper. 2018. Principles of accounting: Managerial accounting. Houston, TX: OpenStax.
Haggag, M., A. Yosri, W. El-Dakhakhni, and E. Hassini. 2021. “Infrastructure performance prediction under climate-induced disasters using data analytics.” Int. J. Disaster Risk Reduct. 56: 102121.
Hastie, T., R. Tibshirani, and J. Friedman. 2009. The elements of statistical learning. Springer series in statistics. New York: Springer.
Heng, F.-L., K. Zhang, A. Goyal, H. Chaudhary, S. Hirsch, Y. Kim, M. A. Lavin, and A. Raman. 2016. “Integrated analytics system for electric industry asset management.” IBM J. Res. Dev. 60 (1–2): 1–12.
Herrera, F., G. Chan, M. Legault, R. M. Kassim, and V. Sharma. 2011. The digital workplace: Think, share, do. London: Deloitte & Touche LLP.
Huang, K., M. Li, and S. Markov. 2019. “The information asymmetry between management and rank-and-file employees: Determinants and consequences.” In Proc., AAA 2019 Management Accounting Section (MAS) Meeting, 1–53. Miami: Social Science Research Network.
IAM (Institute for Asset Management). 2015. Asset management—An anatomy, Ver. 3. Bristol, UK: IAM.
Infrastructure Canada. 2018. Investing in Canada: Canada’s long-term infrastructure plan. Ottawa: Infrastructure Canada.
Koziel, S., P. Hilber, P. Westerlund, and E. Shayesteh. 2021. “Investments in data quality: Evaluating impacts of faulty data on asset management in power systems.” Appl. Energy 281: 116057.
Lee, S. H., J. Y. Choi, S. H. Yoo, and Y. G. Oh. 2013. “Evaluating spatial centrality for integrated tourism management in rural areas using GIS and network analysis.” Tourism Manage. 34 (Feb): 14–24.
McDowell, T., H. Horn, and D. Witkowski. 2016. Organizational network analysis. Seattle: Deloitte.
Mohamad, A., Y. Zainuddin, N. Alam, and G. Kendall. 2017. “Does decentralized decision making increase company performance through its information technology infrastructure investment?” Int. J. Account. Inf. Syst. 27 (Oct): 1–15.
Mukherjee, S., R. Nateghi, and M. Hastak. 2018. “A multi-hazard approach to assess severe weather-induced major power outage risks in the U.S.” Reliab. Eng. Syst. Saf. 283–305.
Opdyke, A., F. Lepropre, A. Javernick-Will, and M. Koschmann. 2017. “Inter-organizational resource coordination in post-disaster infrastructure recovery.” Construct. Manage. Econ. 35 (8–9): 514–530.
Pell, R., R. Svoboda, R. Eagar, P. Ondko, and F. Kirschnick. 2015. Effective infrastructure asset management—A holistic approach to transformation. Hong Kong: Arthur D. Little.
Prajogo, D., J. Toy, A. Bhattacharya, A. Oke, and T. C. E. Cheng. 2018. “The relationships between information management, process management and operational performance: Internal and external contexts.” Int. J. Prod. Econ. 199 (May): 95–103.
Qiu, Q., J. A. Fleeman, D. R. Ball, G. Rackliffe, J. Hou, and L. Cheim. 2013. “Managing critical transmission infrastructure with advanced analytics and smart sensors.” In Proc., IEEE Power and Energy Society General Meeting, 1–6. New York: IEEE.
Ross, R. 2019. Reliability analysis for asset management of electric power grids. New York: Wiley.
Salama, M., M. Ezzeldin, W. El-Dakhakhni, and M. Tait. 2020. “Temporal networks: A review and opportunities for infrastructure simulation.” Sustainable Resilient Infrastruct. 1–16.
Talebiyan, H., and L. Duenas-Osorio. 2020. “Decentralized decision making for the restoration of interdependent networks.” J. Risk Uncertainty Eng. Syst. 6 (2): 04020012.
Thai, M. T., and P. M. Pardalos. 2012. Handbook of optimization in complex networks: Communication and social networks. New York: Springer.
Uddin, W., W. R. Hudson, and R. Haas. 2013. Public infrastructure asset management. New York: McGraw-Hill Education.
Ujwary-Gil, A. 2019. “Organizational network analysis: A study of a university library from a network efficiency perspective.” Library Inf. Sci. Res. 41 (1): 48–57.
Weldon, W. 2008. “Johnson & Johnson CEO William Weldon: Leadership in a decentralized company.” In Proc., Wharton Leadership Conf. Philadelphia: Univ. of Pennsylvania.
Wexler, S., J. Shaffer, and A. Cotgreave. 2017. The big book of dashboards. Hoboken, NJ: Wiley.
Xerri, M. J., S. Nelson, and Y. Brunetto. 2015. “Importance of workplace relationships and attitudes toward organizational change in engineering asset-management organizations.” J. Manage. Eng. 31 (5): 04014074.
Yang, S., W. Zhou, S. Zhu, L. Wang, L. Ye, X. Xia, and H. Li. 2019. “Failure probability estimation of overhead transmission lines considering the spatial and temporal variation in severe weather.” J. Mod. Power Syst. Clean Energy 7 (1): 131–138.
Zhou, K., C. Fu, and S. Yang. 2016. “Big data driven smart energy management: From big data to big insights.” Renewable Sustainable Energy Rev. 215–225.
Zumel, N., and J. Mount. 2020. Practical data science with R. Shelter Island, NY: Manning Publications.
