Introduction

Transportation networks play a fundamental role in the functioning of societies. Whether a local road network or a widespread national grid, these networks provide the necessary framework for economic and societal development and so need to accommodate a prescribed level of service (LOS). The LOS is a measure used to determine how well an infrastructure operates (Adey et al. 2019; TRB 2016). Disruptive events such as floods, storms, and earthquakes may significantly reduce the functionality and service provided by the infrastructure. The US Presidential Policy Directive (PPD) on critical infrastructure security and resilience (Obama 2013) defines resilience as "the ability to prepare for and adapt to changing conditions and withstand and recover rapidly from disruptions. Resilience includes the ability to withstand and recover from deliberate attacks, accidents, or naturally occurring threats or incidents." Consequently, the resilience of infrastructure depends on its response during and after a disruptive event, in other words, how much the LOS will drop and how fast (and cost-efficiently) it can be recovered (Fig. 1). Infrastructure managers should implement provisions to reduce the probability of service disruptions if a disruptive event happens (i.e., mitigate or reduce the service drop between t0 and t2). They should also adopt an intervention program to minimize the time and cost of restoring the LOS in case of a service disruption. These programs can be developed once it is determined which assets are damaged and the extent of the damage.

A comprehensive postdisaster restoration program should indicate the level of recovery, the speed of the restoration, and the sequence in which the damaged road sections, bridges, tunnels, and so forth that make up the network are to be restored. This is more than the simple ordering of damaged assets that often occurs in practice.
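As a minimal, hypothetical illustration of the service drop and recovery sketched in Fig. 1, the cumulative loss of service can be computed as the shortfall between the target LOS and the actual LOS over the disruption period. The timeline and LOS values below are invented for illustration and are not taken from the paper:

```python
# Hypothetical illustration (not from the paper): quantify the service drop
# between the event (t0) and full recovery as the cumulative LOS shortfall.

def cumulative_service_loss(los_timeline, target_los=1.0, dt=1.0):
    """Sum the shortfall (target LOS minus actual LOS) over steps of length dt."""
    return sum((target_los - los) * dt for los in los_timeline)

# Invented daily LOS values: full service, a sudden drop after the event,
# and a staged recovery as interventions are completed.
timeline = [1.0, 1.0, 0.4, 0.4, 0.6, 0.8, 1.0]
print(round(cumulative_service_loss(timeline), 2))  # 1.8 "service-days" lost
```

Under this reading, a restoration program that recovers the LOS faster shrinks the shaded area between the target and actual service curves, which is exactly the quantity the indirect costs in the restoration model penalize.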
Relevant studies focusing on the postdisaster restoration of infrastructure are summarized in Table 1.

Table 1. Relevant studies on postdisaster identification of restoration programs

Earthquake:
- Infrastructure risk management and intervention optimization: Gomez and Baker (2019)
- Restoration of complex infrastructure networks: Morshedlou et al. (2019)
- Restoration strategies for bridges: Bocchini and Frangopol (2012), Tao and Wang (2019)
- Restoration of road networks: Chen and Tzeng (2000)
- Restoration of lifeline systems: Isumi et al. (1985)
- Restoration of water distribution systems: Luna et al. (2011)
- Optimization of restoration programs for electric power systems: Çagnan and Davidson (2004), Tan et al. (2019), Xu et al. (2007)

Tornadoes:
- Development of building restoration functions: Koliou and van de Lindt (2020)
- Restoration strategies for supply-chain infrastructure elements: Ramachandran et al. (2015)

Flood:
- Restoration of roadway networks: Hackl et al. (2018)
- Restoration of highway networks: Lertworawanich (2012)

Hurricanes, ice storms:
- Estimation of restoration time for electric power systems: Liu et al. (2007)

General natural and man-made hazards:
- Restoration programs for infrastructure networks: Hu et al. (2016), Safapour et al. (2020)
- Restoration of critical infrastructure: Fang and Sansavini (2019), Vugrin et al. (2010)
- Restoration of transportation networks: Chen and Miller-Hooks (2012), Fang and Sansavini (2017), Liao et al. (2018), Vugrin et al. (2014)
- Optimal scheduling of emergency roadway repair: Hayat et al. (2019), Yan and Shih (2009)
- Restoration of bridges along highways: Bocchini and Frangopol (2012)
- Ranking of repair schedules for water distribution systems: Bałut et al. (2019)
- Restoration under uncertainty for power grids: Fang and Sansavini (2019)

In general, researchers have used two main approaches to developing restoration programs.
Some have focused on prioritizing the restoration of damaged infrastructure assets using simple equations and rules based on economic or engineering criteria, such as prioritization based on damage level (Buckle et al. 2006), average daily traffic volume (Miller 2014), or relative importance or criticality (Liu et al. 2022; Scott et al. 2006). These approaches are mostly used in real-world practice and are time-efficient; however, they rarely result in optimal solutions.

Others have focused on finding an optimal solution to an objective function that minimizes the consequences of disruptive events. They have used several different objective functions to determine the optimal restoration program, such as minimization of travel time (Chen and Tzeng 2000), minimization of lost trips (Chen and Miller-Hooks 2012), minimization of connectivity loss (Adewole et al. 2012), minimization of the direct costs of interventions (Bocchini and Frangopol 2012), and minimization of overall costs (Hackl et al. 2018). Since the models for solving these objective functions are normally very complex and computationally intensive, finding the optimal solution (global optimum) takes a long time. As a result, the objective functions are often minimized using heuristic models such as genetic algorithms (GAs) (Chen and Tzeng 1999), simulated annealing (Hackl et al. 2018), or ant colony systems (ACSs) (Yan and Shih 2012).

The efficiency of a heuristic algorithm in identifying optimal restoration programs depends on its computation time and the closeness of its solution to the global optimum, both of which are problem-specific and depend on the objective function and the network under study. For this reason, it is not possible to compare the performance of the various algorithms based on current studies, since these studies have looked at different networks and objective functions. So far, researchers have studied single heuristic algorithms for minimizing objective functions.
Consequently, it is difficult to determine how well suited they are to a specific problem. Moreover, these models have mostly been tested on small networks with fewer than a dozen edges. In the few studies where they were tested on real-world networks, the time required to develop an optimal restoration program was unacceptable (Hackl et al. 2018; Orabi et al. 2009), because the indirect costs increase with delays in the execution of the restoration. Therefore, there is still a need for an approach that can determine the optimal restoration intervention program in a short period of time.

The study described in this paper fills these research gaps and advances the state of the art in the postdisaster restoration of transportation networks as follows:
•It investigates the efficiency of common heuristic algorithms in minimizing the objective function of a restoration model that considers the overall direct and indirect costs of restoring road networks after disruptive events, and identifies the most suitable algorithms for this problem.
•It introduces a novel double-stage optimization approach that overcomes the inefficiencies of both existing approaches: prioritization rules and single-stage optimization.

The remainder of this paper is organized as follows: First, the mathematical model used to identify the optimal postdisaster restoration programs is presented. This is followed by a review of commonly used algorithms for combinatorial problems, including their advantages and limitations, and the selection of the three most promising algorithms for solving the optimization problem.
The suggested algorithms are then implemented in a real-world case study, and the results are compared with those of the single-stage optimization approach and an approach based on prioritization rules.

Mathematical Model for Double-Stage Optimization of Restoration Programs

In this approach, the damaged road sections, bridges, tunnels, and so forth that are the most critical in the network, such as lifelines, links with heavy traffic loads, and damaged assets that cause a loss of connectivity, are first identified. This can be done using expert opinion, prioritization rules (Buckle et al. 2006; Miller 2014), or measures of the importance or criticality of the links in a network (Liu et al. 2022; Scott et al. 2006). The optimization procedure is then split into two stages using a mathematical model, as follows:
•Stage 1: The model takes all damaged network assets and minimizes the sum of the direct and indirect costs from the time the disruptive event takes place to the time all network-critical assets are restored to a functioning level. The output of this stage is the first restoration plan, and the restoration work can start immediately after this stage.
•Stage 2: The status of the network is updated with the restored critical assets, and the model minimizes the overall direct and indirect costs of restoring the remaining damaged assets.

The objective function is the same for both stages; only the inputs change. The proposed model has the flexibility to incorporate constraints, including limits on the available budget and resources, the intervention level, and varying traffic assignments during the restoration program. The double-stage optimization benefits from the merits of the commonly used approaches based on simple prioritization rules and single-stage optimization models, but overcomes their limitations.
This approach rapidly develops the initial restoration plan and minimizes the delays between the occurrence of the disruptive event and the execution of the interventions, which results in significant savings in the indirect costs. The objective function of the restoration model and the procedure for calculating the direct and indirect costs are described in the following sections.

Objective Function

As this paper mainly focuses on the short-term impacts of one or more local interruptions, the overall costs are only those related to the immediate impacts on infrastructure and traffic. Consequently, the objective function of the suggested restoration model for both stages [Eq. (1)] aims at the minimization of the overall short-term direct costs C_DC and indirect costs C_IC:

(1) min Z = C_DC + C_IC

Direct Costs

Direct costs include the expenses incurred by the restoration interventions, including cleanup, preparation, and the rehabilitation or reconstruction of damaged objects. The total direct costs are calculated as expressed in Eq. (2) (Hackl et al. 2018):

(2) C_DC = Σ_{n∈N_s} Σ_{i∈I(n|s)} Σ_{t∈T} δ_{n,i,t} · C_{n,i}

where for each damaged object n only one intervention i can be assigned at time t. The binary variable δ_{n,i,t} = 1 if at time t intervention i is executed on object n with damage state s; otherwise, δ_{n,i,t} = 0. The execution costs of each intervention, C_{n,i} [Eq. (3)], are the sum of the fixed costs ϵ (e.g., building site facilities), the variable costs ζ (e.g., costs dependent on the length of an asset), and the resource-related costs η (e.g., labor and construction machinery):

(3) C_{n,i} = ϵ_{n,i} + ζ_{n,i} + η_{n,i}  ∀ n, i

It is possible to add constraints to the model to ensure that the costs of the interventions do not exceed the available budget or time, or that the model does not select more work resources than are available at time t.

Indirect Costs

The indirect costs C_IC considered in this study are related to the immediate reductions in service, such as the loss of connectivity Λ or the travel prolongation Π.
These costs are calculated by deducting the indirect costs at t = 0, before the disruptive event has happened, from the indirect costs at time t [Eq. (4)]:

(4) C_IC = Σ_{t∈T} [ Σ_{P∈P_od^s, e∈P} (Π(t|x_{e,t}) − Π_0(t|x_{e,0})) + Σ_{P∈P_od^g} Λ(t) ]

where P_od^s ⊆ P_od = set of od paths with some possible flow that do not contain any objects with zero functional capacity (state g). The costs due to the loss of connectivity, Λ(t), are estimated by evaluating the number of lost trips at time t, f_{od,t}(P), and the costs related to reductions in labor productivity υ at that time (Freeman 2008):

(5) Λ(t) = f_{od,t}(P) · υ(t)  ∀ t, P ∈ P_od^g

The travel prolongation costs are linked to the traffic flow on the edges and include the costs due to the extra time spent traveling, Φ, and the related increase in vehicle operating costs, Υ [Eq. (6)]:

(6) Π(t|x_{e,t}) = Φ(t|x_{e,t}) + Υ(t|x_{e,t})  ∀ t, e ∈ P, P ∈ P_od^s

where Φ(t|x_{e,t}) and Υ(t|x_{e,t}) can be estimated from Eqs. (7) and (8), respectively:

(7) Φ(t|x_{e,t}) = t_{e,t}(x_{e,t}) · Σ_{w∈W} μ_{e,w} · ξ_{e,w}

where μ_{e,w} = percentage of vehicle type w on edge e; and ξ_{e,w} = travel costs per time unit. The vehicle operating costs comprise fuel consumption and vehicle maintenance and are estimated as follows (Adey et al. 2012):

(8) Υ(t|x_{e,t}) = x_{e,t} · l_e · Σ_{w∈W} μ_{e,w} · (ν · F_w + ρ_w)

where l_e = length of each edge; F_w and ν = average fuel consumption and average fuel price, respectively; and ρ_w = operating and maintenance costs for each vehicle type, excluding fuel consumption. The link flow x_{e,t} is determined by solving a user equilibrium assignment, as indicated in Eq. (9):

(9) x_{e,t} ∈ arg min Z_T = Σ_{e∈ε_s} ∫₀^{x_{e,t}} C_e^T(ω) dω

Equilibrium can only be attained when travelers cannot lower their travel costs by individually changing routes (Beckmann et al. 1955; Nagurney and Zhang 2007). The cost-flow relationship C_e^T can be determined from the function suggested by the Bureau of Public Roads (BPR 1964). In this function, the travel cost is expressed as travel time per unit distance, as both are directly linked to the traffic flow on an edge [Eq. (10)]:

(10) C_e^T ≔ t_{e,t}(x_{e,t}) = t_e^0 · (1 + α_e · (x_{e,t}/y_{e,t})^{β_e})

where t_e^0 = free-flow travel duration; t_{e,t} = travel duration in period t on edge e with traffic flow x_{e,t}; y_{e,t} = edge capacity; and α and β = calibration parameters.

It is possible to consider a weighting factor γ for the indirect costs, due to the high level of uncertainty normally associated with quantifying them. Using this factor enables decision-makers to select the extent of the influence the indirect costs have on the choice of the restoration program (Hackl et al. 2018).

Optimal Solution

As observed in Eqs. (1) and (9), the identification of an optimal postdisaster restoration program, for both stages, is a bilevel optimization problem in which the minimization of the overall intervention costs [Eq. (1)], the upper-level optimization, depends on the traffic assignment [Eq. (9)]. The lower-level optimization can be solved with a traffic model of choice, such as a conjugate direction Frank-Wolfe algorithm. The upper-level optimization is a combinatorial problem (Hackl et al. 2018), in which the optimal restoration program is selected from a set of possible combinations of restoration interventions over a specific period. This problem is computationally complex, nondifferentiable, and nonconvex. Additionally, since the indirect costs increase with time, computation speed has a significant influence on the efficiency of the selected algorithm.

As different algorithms approach the search for the optimal solution differently, their efficiency is problem-specific and varies with the objective function and the number of feasible solutions. To date, no research has been done on the most efficient heuristic algorithms for identifying optimal postdisaster intervention programs.
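To make the cost side of this bilevel problem concrete, the following is a minimal sketch of the BPR cost-flow relationship [Eq. (10)] and the travel-prolongation costs [Eqs. (6)-(8)] for a single edge and period. All parameter values below (vehicle mix, unit costs, fuel price) are hypothetical, not the calibrated values of the case study:

```python
# Hedged sketch of Eqs. (6)-(8) and (10); all parameter values are hypothetical.

def bpr_travel_time(t0, x, y, alpha=0.15, beta=4.0):
    """Eq. (10): travel time on an edge with free-flow time t0, flow x, capacity y."""
    return t0 * (1.0 + alpha * (x / y) ** beta)

def travel_prolongation(t0, x, y, length, vehicle_mix, fuel_price=1.8):
    """Eq. (6): Pi = Phi + Upsilon for a single edge and period.

    vehicle_mix: list of (share, cost_per_hour, fuel_per_km, om_cost_per_km)
    tuples, one per vehicle type w.
    """
    t = bpr_travel_time(t0, x, y)
    # Eq. (7): extra-time costs, weighted by the vehicle mix on the edge.
    phi = t * sum(share * cph for share, cph, _, _ in vehicle_mix)
    # Eq. (8): vehicle operating costs (fuel plus operation and maintenance).
    upsilon = x * length * sum(share * (fuel_price * fpk + om)
                               for share, _, fpk, om in vehicle_mix)
    return phi + upsilon

# Hypothetical 10 km edge: capacity 1,500 veh/h, flow 1,200 veh/h,
# 90% cars and 10% trucks.
mix = [(0.9, 20.0, 0.08, 0.10), (0.1, 60.0, 0.30, 0.25)]
print(round(travel_prolongation(0.12, 1200.0, 1500.0, 10.0, mix), 2))
```

Summed over all edges and periods, and combined with the connectivity-loss term of Eq. (5), terms of this kind form the indirect-cost component C_IC that the upper-level optimization minimizes.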
In the following section, common heuristic algorithms suitable for combinatorial problems are introduced, and their efficiency for the proposed double-stage restoration model is compared.

Commonly Used Heuristic Algorithms for Combinatorial Problems

Algorithms for finding optimal intervention programs can be divided into exact and heuristic algorithms. An exact algorithm is guaranteed to find the globally optimal intervention program, whereas a heuristic algorithm will find a good (near optimal) intervention program that might not be the global optimum. For computationally complex, nondifferentiable, and nonconvex problems, heuristic algorithms have the advantage of shorter computation times, which makes them more suitable for postdisaster decision-making, where delays increase the indirect costs. Table 2 presents a short overview of common heuristic algorithms suitable for combinatorial problems.

Table 2. Comparison of commonly used heuristic algorithms for combinatorial problems

Branch and bound (Carpaneto and Toth 1980; Carrabs et al. 2013)
Method: (1) Forms the set of solutions as a rooted tree. (2) Explores the tree branches, which represent subsets of the solution set. (3) Checks the upper and lower bounds of the solution for each branch and discards the branch if it cannot contain a better solution than those identified so far; if the branch is promising, enumerates its candidate solutions.
Advantages: Finds the optimal solution. Suitable for problems with fewer search points.
Limitations: Efficiency depends on the accuracy of the estimated lower/upper bounds of the branches. If estimation is not possible, the algorithm performs an exhaustive search (very slow for large networks).

Nearest neighbor (Hurkens and Woeginger 2004; Nilsson 2003; Rosenkrantz et al. 1977)
Method: (1) Selects a random point. (2) Finds the nearest (lowest-cost) unvisited point and goes there. (3) Repeats Step 2 while any unvisited points are left. (4) Returns to the first point.
Advantages: Very simple heuristic, especially for the traveling salesman problem (TSP).
Limitations: Prior knowledge of the costs from one point to all other points is required. Stops when a solution is found and does not try to improve it.

Greedy heuristic (Geng et al. 2011; Nilsson 2003)
Method: (1) Sorts all edges. (2) Selects the shortest (lowest-cost) unselected edge and adds it to the tour. (3) Checks whether there are N edges in the tour; if not, repeats Step 2.
Advantages: More efficient than nearest neighbor in finding the optimal solution.
Limitations: Prior knowledge of the costs of all possible edges is required. Stops when a solution is found and does not try to improve it.

Tabu search (Knox 1994; Malek et al. 1989; Nilsson 2003)
Method: (1) Checks the immediate neighbors of the current solution to find a better one. (2) Allows moves with negative gain if a positive one cannot be found. (3) Maintains a tabu list of recent moves; a move on the list is never implemented again while it remains on the list, unless it provides a better solution.
Advantages: Avoids getting stuck in a local optimum.
Limitations: Long computation time.

Simulated annealing (SA) (Geng et al. 2011; Hackl et al. 2018; Malek et al. 1989; Nilsson 2003)
Method: (1) Starts from a random point and calculates the value of the objective function Z (the initial state). (2) Applies random shifts based on the selected neighborhood and computes ΔZ. (3) Accepts the new point or stays with the current state: if the solution improved, it is selected; if not, it is accepted with probability e^(−ΔZ/T), where T = algorithm temperature. During the search, the temperature is gradually decreased from an initial positive value to a value near zero.
Advantages: Provides a complexity-efficiency balance, so it outperforms other commonly used algorithms.
Limitations: Embedded parameters (e.g., acceptance probability function, initial/end temperature, annealing schedule) significantly influence effectiveness, but there is no general way to find the best parameters for a given problem.

Genetic algorithm (GA) (Chen and Tzeng 1999; Grefenstette et al. 1985; Nilsson 2003; Potvin 1996)
Method: (1) Randomly generates a population of feasible solutions. (2) The created population (feasible solutions) mates to produce the next generation; some individuals undergo mutation. Solution quality is evaluated using a fitness value. (3) Selects the fittest candidates for mating and mutation, increasing the overall fitness of the solutions.
Advantages: Parameters are relatively easy to adjust.
Limitations: Computationally expensive.

Ant colony (AC) (Dorigo and Gambardella 1997; Nilsson 2003; Yan and Shih 2012)
Method: (1) A group of ants start from various points and move to new points. (2) The ants leave a pheromone trail proportional to the inverse of the length (cost) of the tour. (3) Ants select the path with the strongest pheromone trail; the process is repeated until a short enough (lowest-cost) tour is found.
Advantages: Quickly determines optimal solutions for small problems.
Limitations: Requires prior knowledge of the costs from one point to all other points.

Particle swarm optimization (PSO) (Clerc 2004; Wang et al. 2003)
Method: (1) Uses individual-particle physical movements. (2) Particle movements are controlled by the local best-known solution and are ushered toward the best-known solution in the search space; the search space is updated as better solutions are found. (3) The Step 2 procedure prompts the swarm to move toward the best solutions.
Advantages: Well-balanced mechanism to enhance and adapt the global and local exploration abilities. Relatively fast and easy to implement.
Limitations: The coefficients c1 and c2 need tuning to obtain optimal results in a relatively short time, but the choices are problem-specific.

By comparing the algorithms in Table 2, it can be deduced that algorithms that are very easy and fast to use in most combinatorial problems are not necessarily fast in finding optimal restoration programs.
For example, tour construction algorithms such as nearest neighbor and the greedy heuristic, and tour data structure algorithms such as ant colony, require information on the distance (cost) from one point to all other points. While this can be a very simple task for problems such as the traveling salesman problem (Halim and Ismail 2019), it is not the case for the presented restoration model. This is due to the existence of indirect costs that depend on the traffic model; hence, the traffic model would have to be run for each set separately, which would significantly increase the computation time. Such algorithms are therefore not efficient for this problem. Branch and bound and tabu search are also not efficient for large networks due to their computation times (Knox 1994). Consequently, the authors selected three promising algorithms, namely simulated annealing (SA), genetic algorithms (GAs), and particle swarm optimization (PSO), to compare their potential for identifying optimal restoration programs after a disruptive event.

Simulated Annealing

Simulated annealing is effective in finding global optima in the presence of multiple local optima. This algorithm is usually used in discrete but very large search spaces. Fig. 2 presents the flowchart for SA. In addition to the number of iterations, this algorithm involves the adjustment of several problem-dependent parameters, such as the cooling schedule, the number of steps, and the neighborhood selection (Hackl et al. 2018; Henderson et al. 2003; Siddique and Adeli 2016).

Genetic Algorithm

The genetic algorithm is a heuristic inspired by the theory of natural evolution. It starts with the formation of an initial population that comprises random restoration programs (i.e., random orders in which interventions are executed on the damaged objects). In identifying the optimal restoration program, the fitness of each individual (program) is the inverse of its total costs.
Consequently, a mating pool is formed based on the fitness score of each program, meaning that the programs with higher fitness have a greater chance of being selected for mating. For each pair of parents, a crossover point is randomly selected within the genes that represent the damaged objects and their restoration level, and the next generation is created by swapping the genes of the parents up to the crossover point. Mutation subjects some of the individuals in the new generation to gene flips with a low probability. This is done to preserve population diversity and avoid premature convergence. The parameters involved in this algorithm are the population size, elite size, number of generations, and the mutation and crossover rates, which are problem-specific. Fig. 3 provides a flowchart of this algorithm.

Particle Swarm Optimization

Particle swarm optimization is a population-based algorithm in which some random solutions are first initialized. The algorithm then searches for the optimal solution by updating the direction of movement in the search space, meaning that the particles (potential solutions) move through the search space by following the existing best available solutions. This algorithm requires the adjustment of several parameters, including the number of particles, the number of iterations, and the individual c1, social c2, and inertia w coefficients. These parameters are problem-dependent and need tuning to find near optimal solutions in a timely manner. In general, as the individual coefficient c1 approaches zero, the particles converge to the local/global optimum faster. On the other hand, as the social coefficient c2 approaches zero, the particles search only their own vicinity. Fig. 4 illustrates the pseudocode for a discrete PSO that is suitable for combinatorial problems.

Results

The results of the double-stage restoration model using SA, GA, and PSO were compared with those of two other commonly used approaches: single-stage optimization and prioritization rules.
For single-stage optimization, the same algorithms (SA, GA, and PSO) were used, and the restoration programs were developed for all damaged objects in a single stage. The prioritization rule adopted for this study used a combination of the damage level and the importance of each object in the network. In this approach, the damaged objects were sorted based on average traffic flow, and the objects that caused a loss of connectivity in the network were restored first; the remaining objects were restored afterward. Table 6 summarizes the results of the three approaches for the canton of Grisons after a flood event with a 500-year return period.

Table 6. Direct and indirect costs of network restoration after an extreme flood event (costs in mu; approximate running times in h)

Double-stage optimization, Stage 1:
  SA:  direct 781,731; indirect 3,199,447; total 3,981,178; running time 0.25
  GA:  direct 781,731; indirect 3,199,447; total 3,981,178; running time 2.5
  PSO: direct 781,731; indirect 3,199,447; total 3,981,178; running time 1.25
Double-stage optimization, Stage 2:
  SA:  direct 2,941,655; indirect 492,664; total 3,434,319; running time 13–17
  GA:  direct 2,995,591; indirect 562,591; total 3,558,088; running time 55–60
  PSO: direct 2,941,655; indirect 467,841; total 3,409,496; running time 11–17
Double-stage optimization, overall:
  SA:  direct 3,723,386; indirect 3,692,111; total 7,415,497; running time 13–17
  GA:  direct 3,777,322; indirect 3,762,038; total 7,539,266; running time 58–63
  PSO: direct 3,723,386; indirect 3,667,288; total 7,390,674; running time 12–18
Single-stage optimization, without delay:
  SA:  direct 3,725,003; indirect 3,573,481; total 7,298,484; running time 13–17
  GA:  direct 3,897,025; indirect 3,809,705; total 7,706,730; running time 57–62
  PSO: direct 3,725,003; indirect 3,452,824; total 7,177,827; running time 12–17
Single-stage optimization, with delay:
  SA:  direct 3,725,003; indirect 3,905,690; total 7,630,693; running time 13–17
  GA:  direct 3,897,024; indirect 4,541,915; total 8,438,939; running time 57–62
  PSO: direct 3,725,003; indirect 3,790,711; total 7,515,714; running time 12–17
Prioritization rule:
  direct 3,916,025; indirect 3,868,662; total 7,784,687; running time 0.10

According to the results in Table 6, SA is more suitable for the first stage of the proposed approach, where the search space is smaller (four critical objects). All three algorithms reach the same solution, although SA is much faster than the other two. For the second stage, however, PSO seems to be more suitable. The running time for SA with 100 steps and 100 updates (10,000 tests in total) ranged from 13 to 17 h.
The GA with a population size of 25 and 200 generations (5,000 tests) took 55–60 h, and the PSO with 10 particles and 100 iterations (1,000 tests) took approximately 11–17 h. The significant variations in the running time of each algorithm are due to variations in server traffic.

It can be observed that SA is the fastest algorithm for running a single test; however, PSO, although slower than SA per test, reaches a slightly better solution with significantly fewer tests (1,000 versus 10,000). Hence, both methods take almost the same amount of time to reach a near optimal solution. It is expected that in larger networks the running time for PSO will be lower than that for SA, since it converges to the near optimal solution with significantly fewer iterations. GA is relatively slow; in 5,000 tests it did not reach the solutions that SA and PSO produced. The variations in the results are mainly due to the indirect costs; in particular, SA and PSO have the same direct costs, and the small difference in their overall costs is due to the indirect consequences.

To assess the efficiency of the proposed approach, the results were compared with the two other commonly used approaches. The overall cost of using simple prioritization rules is 7,784,687 mu, which is higher than the solutions proposed by all three algorithms in double-stage optimization. In terms of time efficiency, both approaches are very fast. Double-stage optimization provides the first restoration plan within 15 min (using SA), and while the restoration work is being carried out on the critical objects in the network, the PSO algorithm runs to find the optimal restoration plan for the remaining objects. The time to develop the restoration program using prioritization rules is less than 10 min.

Single-stage optimization (without delay) takes significantly longer to develop the restoration plan for the work crews; however, the overall costs determined by SA and PSO are lower than those for the other two approaches.
Nevertheless, these values do not reflect the actual imposed costs, since the additional time spent running the algorithms adds to the indirect costs. For this reason, the minimum required running time was considered, and the delay was added to the simulations as a constraint. The updated costs (with delay) suggest that the SA and PSO solutions using single-stage optimization are still better than those based on prioritization rules. However, all three algorithms with double-stage optimization outperform single-stage optimization when the delays are considered.

Fig. 8 illustrates the near optimal intervention levels and the sequence of restoring the damaged network assets. The red interventions (Days 1–11) are related to the first stage of the optimization using SA, and the tan interventions (Days 11–36) belong to the second stage using PSO. The upper part of each program shows the components of the direct and indirect costs; the bottom shows the system recovery over time. Restoration is completed in 36 days. In the program, low-priority interventions are chosen to repair the noncritical objects in the network. This is done to speed up network recovery and reduce the imposed indirect costs.

Fig. 9 illustrates the development of the single-stage restoration using PSO with and without the one-day delay (i.e., the 12–17 h needed to develop the restoration programs using PSO, which results in a one-day delay before the work crews can start the restoration work). It can be observed that the only difference between these two programs (with and without delay) is the indirect costs caused by the delay in starting the restoration work. The developed program confirms that to reach near optimal solutions it is best to restore the critical objects first. For example, it can be observed that the bridge with major damage, 2052 (1B in Fig. 9), is the reason for the loss of connectivity in the network; as soon as it is restored, the connectivity loss is eliminated.
Early and fast restoration of this object can reduce the indirect costs incurred by travel prolongations or lost trips. Consequently, a high-priority intervention (Level 1) was chosen to restore this object. Moreover, the major travel prolongation is caused by the bridge with minor damage, 2042 (2B in Fig. 9), which is the second object repaired in the restoration program optimized by PSO.

Conclusions

To make optimal decisions for the postdisaster restoration of infrastructure systems, where the consequences are severe, advanced models are required that consider complex spatial and temporal relationships. As the complexity and the number of objects increase, traditional models are not sufficient, and new approaches have to be used to find practical (near optimal) solutions. This paper proposed a novel double-stage optimization to identify near optimal postdisaster restoration programs for road networks, using three promising heuristic algorithms: simulated annealing, genetic algorithms, and particle swarm optimization.

The approach was tested on a real-world case: a road network in Grisons, Switzerland, after a flood event with a 500-year return period. The efficiency of the proposed approach was compared with the two most common approaches discussed in the literature and used in real-world practice: single-stage optimization and prioritization rules. The results confirm that the double-stage approach outperforms the other two. Using the proposed model, a near optimal solution can be found relatively quickly after the natural hazard event occurs (in comparison with the single-stage approaches). This is significant because, when investigating real-world scenarios, the possible solution space can become so large so quickly that it is not possible to find the optimal result within a finite time. Additionally, the approach provides better solutions than prioritization rules, the benchmark model that is mostly implemented in practice.
Consequently, this model brings an increase in performance and accuracy compared with current state-of-the-art solutions.

Due to its efficiency, the proposed model can be applied to networks of any size and to a variety of infrastructure types and natural hazard events. It is also beneficial for infrastructure managers who are responsible for determining the resilience of their infrastructure to disruptive events. The model can provide estimates of the time required to recover the desired LOS following a disruptive event, as well as insights into the various possible restoration programs and the trade-offs between direct and indirect costs.

Despite the demonstrated advantages of the proposed model, one must be aware that reality is much more complex than what the model reflects. This paper focuses mainly on the short-term impact of one or more local interruptions. Hence, only the immediate impacts on assets and traffic (i.e., extended travel times and impossible trips) are considered. Future studies should focus on developing a method that accounts for the life-cycle costs of decisions under uncertainty, as well as a probabilistic model that takes into account preexisting infrastructure conditions (i.e., the condition the infrastructure is in at the time of the natural hazard).

Another limitation of the proposed restoration model is related to the traffic model, which is based on a static user equilibrium traffic assignment and cannot account for the features that more sophisticated traffic models capture. For example, the model assumes that travelers are fully informed of traffic conditions, and changes in travel patterns after a disruptive event are not considered. This traffic model was used mainly because it is computationally fast and does not require many input parameters, which is beneficial for a heuristic approach.
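For illustration, static assignment models of this kind typically compute congested link travel times with a volume-delay function such as the BPR (1964) formula, t = t0 · (1 + α(v/c)^β). The α = 0.15, β = 4 values and the link numbers below are textbook defaults and made-up figures, not parameters calibrated for the Grisons network:

```python
def bpr_travel_time(free_flow_time, volume, capacity, alpha=0.15, beta=4.0):
    """BPR volume-delay function: t = t0 * (1 + alpha * (v/c)**beta).
    alpha and beta are the classic defaults; real studies calibrate them."""
    return free_flow_time * (1.0 + alpha * (volume / capacity) ** beta)

# A closed bridge reroutes demand onto a detour link; the higher
# volume-to-capacity ratio raises travel time, driving the indirect costs.
t_normal = bpr_travel_time(free_flow_time=10.0, volume=800, capacity=1000)   # ~10.6 min
t_detour = bpr_travel_time(free_flow_time=10.0, volume=1600, capacity=1000)  # ~19.8 min
```

Summed over all origin-destination flows, such travel-time increases (together with lost trips) yield the indirect cost term that the restoration optimization trades off against the direct intervention costs.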
However, given the ability of the proposed double-stage approach to rapidly develop initial restoration plans, it would be interesting to investigate how the efficiency of the model would be affected if traffic models with more realistic representations of traffic flow were used. Consequently, a future step would be to implement a dynamic traffic assignment (DTA) model to represent dynamic traffic phenomena: queues, spillbacks, wave propagation, capacity drops, and the like.

It should be kept in mind that, in addition to data quality, the performance of the model is affected by the selected parameters, which are highly context-dependent. Also, due to the large solution space, it is hard, and in most cases impossible, to determine how close the generated solutions are to the global optimum.

Nevertheless, this work represents a first, essential step in the field of risk-informed decision-making for complex infrastructure systems. Especially relevant to future resilient infrastructure is that the presented work forms the basis for numerous applied and scientific extensions. These include investigating reinforcement learning or hybrid heuristic algorithms in the proposed model and comparing the results with SA and PSO, and extending the scope of the study to interdependent networks and multihazard scenarios.

References

Adewole, A. P., K. Otubamowo, and T. O. Egunjobi. 2012. "A comparative study of simulated annealing and genetic algorithm for solving the travelling salesman problem." Int. J. Appl. Inf. Syst. 4 (4): 6–12. https://doi.org/10.5120/ijais12-450678.
Adey, B. T., T. Hermann, K. Tsafatinos, J. Luking, N. Schindele, and R. Hajdin. 2012. "Methodology and base cost models to determine the total benefits of preservation interventions on road sections in Switzerland." Struct. Infrastruct. Eng. 8 (7): 639–654. https://doi.org/10.1080/15732479.2010.491119.
Adey, B. T., C. Martani, C. Kielhauser, I. Robles Urquijo, N. Papathanasiou, and M. Burkhalter. 2019. Guideline to measure level of service and resilience in infrastructures. FORESEE Deliverable D1.1. Zürich, Switzerland: ETH Zurich.
Babazadeh, A., H. Poorzahedy, and S. Nikoosokhan. 2011. "Application of particle swarm optimization to transportation network design problem." J. King Saud Univ. Sci. 23 (3): 293–300. https://doi.org/10.1016/j.jksus.2011.03.001.
Bałut, A., R. Brodziak, J. Bylka, and P. Zakrzewski. 2019. "Ranking approach to scheduling repairs of a water distribution system for the post-disaster response and restoration service." Water 11 (8): 1591.
Beckmann, M., C. B. McGuire, and C. B. Winsten. 1955. Studies in the economics of transportation. New Haven, CT: Yale University Press.
BPR (Bureau of Public Roads). 1964. Traffic assignment manual. Washington, DC: US Dept. of Commerce, Bureau of Public Roads.
Buckle, I. G., I. Friedland, J. Mander, G. Martin, R. Nutt, and M. Power. 2006. Seismic retrofitting manual for highway structures. Part 1, Bridges. McLean, VA: Turner-Fairbank Highway Research Center.
Çagnan, Z., and R. Davidson. 2004. "Post-earthquake restoration modeling of electric power systems." In Proc., 13th World Conf. on Earthquake Engineering, 1–12. Vancouver, Canada: 13 WCEE Conference Secretariat.
Carpaneto, G., and P. Toth. 1980. "Some new branching and bounding criteria for the asymmetric travelling salesman problem." Manage. Sci. 26 (7): 736–743.
Carrabs, F., R. Cerulli, and M. G. Speranza. 2013. "A branch-and-bound algorithm for the double travelling salesman problem with two stacks." Networks 61 (1): 58–75.
Chen, L., and E. Miller-Hooks. 2012. "Resilience: An indicator of recovery capability in intermodal freight transport." Transp. Sci. 46 (1): 109–123. https://doi.org/10.1287/trsc.1110.0376.
Chen, Y.-W., and G.-H. Tzeng. 1999. "A fuzzy multi-objective model for reconstructing the post-quake road-network by genetic algorithm." Int. J. Fuzzy Syst. 1 (2): 85–95.
Chen, Y.-W., and G.-H. Tzeng. 2000. "Determining the optimal reconstruction priority of a post-quake road network." In Vol. 99 of Computing in civil and building engineering, edited by R. Fruchter, F. Peña-Mora, and W. M. Kim Roddis, 686–693. Reston, VA: ASCE.
Clerc, M. 2004. "Discrete particle swarm optimization, illustrated by the traveling salesman problem." In New optimization techniques in engineering, 219–239. Berlin: Springer.
de Dios Ortuzar, J., and L. G. Willumsen. 2011. Modelling transport. New York: Wiley.
Dorigo, M., and L. M. Gambardella. 1997. "Ant colonies for the travelling salesman problem." Biosystems 43 (2): 73–81.
EPSG. 2019. "Coordinate systems worldwide." Accessed April 30, 2019. https://epsg.io/21781.
FEDRO (Federal Roads Office). 2015. Verkehrsentwicklung und Verfügbarkeit der Nationalstrassen [Traffic development and availability of the national roads]. Ittigen, Switzerland: FEDRO.
FOSD (Federal Office for Spatial Development). 2015. Nationales Personenverkehrsmodell des UVEK, Aktualisierung auf den Basiszustand 2010 [National passenger transport model of the DETEC, update to the 2010 base state]. Ittigen, Switzerland: FOSD.
Geng, X., Z. Chen, W. Yang, D. Shi, and K. Zhao. 2011. "Solving the traveling salesman problem based on an adaptive simulated annealing algorithm with greedy search." Appl. Soft Comput. 11 (4): 3680–3689.
Gomez, C., and J. W. Baker. 2019. "An optimization-based decision support framework for coupled pre- and post-earthquake infrastructure risk management." Struct. Saf. 77 (24): 1–9.
Grefenstette, J., R. Gopal, B. Rosmaita, and D. Van Gucht. 1985. "Genetic algorithms for the traveling salesman problem." In Proc., 1st Int. Conf. on Genetic Algorithms and Their Applications, 160–168. Mahwah, NJ: Lawrence Erlbaum Associates.
Hackl, J., B. T. Adey, and N. Lethanh. 2018. "Determination of near-optimal restoration programs for transportation networks following natural hazard events using simulated annealing." Comput.-Aided Civ. Infrastruct. Eng. 33 (8): 618–637. https://doi.org/10.1111/mice.12346.
Halim, A. H., and I. Ismail. 2019. "Combinatorial optimization: Comparison of heuristic algorithms in travelling salesman problem." Arch. Comput. Methods Eng. 26 (2): 367–380. https://doi.org/10.1007/s11831-017-9247-y.
Hassanat, A., K. Almohammadi, E. Alkafaween, E. Abunawas, A. Hammouri, and V. B. Prasath. 2019. "Choosing mutation and crossover ratios for genetic algorithms—A review with a new dynamic approach." Information 10 (12): 390. https://doi.org/10.3390/info10120390.
Hayat, E., R. Haigh, and D. Amaratunga. 2019. "A framework for reconstruction of road infrastructure after a disaster." Int. J. Disaster Resilience Built Environ. 10 (3): 151–166. https://doi.org/10.1108/IJDRBE-03-2017-0018.
Henderson, D., S. H. Jacobson, and A. W. Johnson. 2003. "The theory and practice of simulated annealing." In Handbook of metaheuristics, 287–319. New York: Springer.
Horowitz, A. J. 1991. Delay-volume relations for travel forecasting: Based upon the 1985 Highway Capacity Manual. Madison, WI: Univ. of Wisconsin.
Hu, F., C. H. Yeung, S. Yang, W. Wang, and A. Zeng. 2016. "Recovery of infrastructure networks after localised attacks." Sci. Rep. 6 (1): 24522. https://doi.org/10.1038/srep24522.
Hurkens, C. A. J., and G. J. Woeginger. 2004. "On the nearest neighbor rule for the traveling salesman problem." Oper. Res. Lett. 32 (1): 1–4.
Isumi, M., N. Nomura, and T. Shibuya. 1985. "Simulation of post-earthquake restoration of lifeline systems." Int. J. Mass Emergencies Disasters 3 (1): 87–105.
Lam, J. C., and B. T. Adey. 2016. "Functional loss assessment and restoration analysis to quantify indirect consequences of hazards." ASCE-ASME J. Risk Uncertainty Eng. Syst. Part A: Civ. Eng. 2 (4): 04016008. https://doi.org/10.1061/AJRUA6.0000877.
Liu, H., R. A. Davidson, and T. V. Apanasovich. 2007. "Statistical forecasting of electric power restoration times in hurricanes and ice storms." IEEE Trans. Power Syst. 22 (4): 2270–2279.
Liu, Y., S. McNeil, J. Hackl, and B. T. Adey. 2022. "Prioritizing transportation network recovery using a resilience measure." Sustainable Resilient Infrastruct. 7 (1): 70–81. https://doi.org/10.1080/23789689.2019.1708180.
Luna, R., N. Balakrishnan, and C. H. Dagli. 2011. "Postearthquake recovery of a water distribution system: Discrete event simulation using colored petri nets." J. Infrastruct. Syst. 17 (1): 25–34. https://doi.org/10.1061/(ASCE)IS.1943-555X.0000039.
Malek, M., M. Guruswamy, M. Pandya, and H. Owens. 1989. "Serial and parallel simulated annealing and tabu search algorithms for the traveling salesman problem." Ann. Oper. Res. 21 (1): 59–84.
Miller, M. K. 2014. "Seismic risk assessment of complex transportation networks." Ph.D. thesis, Dept. of Civil and Environmental Engineering, Stanford Univ.
Mitradjieva, M., and P. O. Lindberg. 2013. "The stiff is moving—Conjugate direction Frank-Wolfe methods with applications to traffic assignment." Transp. Sci. 47 (2): 280–293. https://doi.org/10.1287/trsc.1120.0409.
Nagurney, A., and W.-B. Zhang. 2007. "Mathematical models of transportation and networks." Math. Models Econ. 2: 346–384.
Nilsson, C. 2003. Heuristics for the traveling salesman problem, 85–89. Linköping, Sweden: Linköping Univ.
Obama, B. H. 2013. Critical infrastructure security and resilience. Presidential Policy Directive PPD-21. Washington, DC: The White House.
Orabi, W., K. El-Rayes, A. B. Senouci, and H. Al-Derham. 2009. "Optimizing postdisaster reconstruction planning for damaged transportation networks." J. Constr. Eng. Manage. 135 (10): 1039–1048. https://doi.org/10.1061/(ASCE)CO.1943-7862.0000070.
Potvin, J. Y. 1996. "Genetic algorithms for the traveling salesman problem." Ann. Oper. Res. 63 (3): 337–370.
Ramachandran, V., S. K. Long, T. Shoberg, S. Corns, and H. J. Carlo. 2015. "Framework for modeling urban restoration resilience time in the aftermath of an extreme event." Nat. Hazards Rev. 16 (4): 4015005. https://doi.org/10.1061/(ASCE)NH.1527-6996.0000184.
Rosenkrantz, D. J., R. E. Stearns, and P. M. Lewis II. 1977. "An analysis of several heuristics for the traveling salesman problem." SIAM J. Comput. 6 (3): 563–581. https://doi.org/10.1137/0206041.
Safapour, E., S. Kermanshachi, and T. Jahan Nipa. 2020. "A damage-based analysis of rework in reconstruction of infrastructure projects due to natural disasters." In Proc., Creative Construction e-Conf. 2020, 55–62. Budapest, Hungary: Budapest Univ. of Technology and Economics.
Scott, D. M., D. C. Novak, L. Aultman-Hall, and F. Guo. 2006. "Network robustness index: A new method for identifying critical links and evaluating the performance of transportation networks." J. Transp. Geogr. 14 (3): 215–227. https://doi.org/10.1016/j.jtrangeo.2005.10.003.
Tan, Y., F. Qiu, A. K. Das, D. S. Kirschen, P. Arabshahi, and J. Wang. 2019. "Scheduling post-disaster repairs in electricity distribution networks." IEEE Trans. Power Syst. 34 (4): 2611–2621.
Tao, W., and N. Wang. 2019. "Determination of optimum post-earthquake restoration strategies for highway bridges by Markov decision process." In Proc., 13th Int. Conf. on Applications of Statistics and Probability in Civil Engineering (ICASP13). Seoul: Seoul National Univ.
VSS (Schweizerischer Verband der Strassen- und Verkehrsfachleute). 2009a. Kosten-Nutzen-Analysen im Strassenverkehr: Betriebskosten von Strassenfahrzeugen [Cost-benefit analyses in road traffic: Operating costs of road vehicles]. Zürich, Switzerland: VSS.
VSS (Schweizerischer Verband der Strassen- und Verkehrsfachleute). 2009b. Kosten-Nutzen-Analysen im Strassenverkehr: Zeitkosten im Personenverkehr [Cost-benefit analyses in road traffic: Time costs in passenger traffic]. Zürich, Switzerland: VSS.
Vugrin, E. D., M. A. Turnquist, and N. J. K. Brown. 2010. Optimal recovery sequencing for critical infrastructure resilience assessment. Albuquerque, NM: Sandia National Laboratories.
Vugrin, E. D., M. A. Turnquist, and N. J. K. Brown. 2014. "Optimal recovery sequencing for enhanced resilience and service restoration in transportation networks." Int. J. Crit. Infrastruct. 10 (3–4): 218–246. https://doi.org/10.1504/IJCIS.2014.066356.
Wang, K. P., L. Huang, C. G. Zhou, and W. Pang. 2003. "Particle swarm optimization for traveling salesman problem." In Proc., 2003 Int. Conf. on Machine Learning and Cybernetics, 1583–1585. New York: IEEE.
Xu, N., S. D. Guikema, R. A. Davidson, L. K. Nozick, Z. Çağnan, and K. Vaziri. 2007. "Optimizing scheduling of post-earthquake electric power restoration tasks." Earthquake Eng. Struct. Dyn. 36 (2): 265–284.
Yan, S., and Y. L. Shih. 2012. "An ant colony system-based hybrid algorithm for an emergency roadway repair time-space network flow problem." Transportmetrica 8 (5): 361–386. https://doi.org/10.1080/18128602.2010.515550.


