### SSSC modeling

In Fig. 1, a single-machine infinite-bus system with an SSSC is presented. The voltage generated by the SSSC is controlled by the pulse-width modulation (PWM) method so that it always lies along the *q* axis relative to the line current, i.e. in quadrature with it^{15}. The SSSC can therefore act as a capacitor or an inductor in the system. The magnitude of the PWM signal controls the degree of compensation, and its phase angle determines the type of compensation: capacitive or inductive^{16}.

The PWM voltage-generation diagram is shown in Fig. 2. The magnitude of the PWM signal is obtained by multiplying the AC current measured by the SSSC by the reference reactance. The \(X_{sdelta}\) is calculated by the proposed controller described in the fifth section. The phase angle of the PWM signal is obtained from the synchronous phase angle, and a proportional-integral (PI) controller keeps the DC voltage constant so that the active power exchanged between the power system and the SSSC is zero. As a result, the phase angle generated by the diagram in Fig. 2 ensures that the voltage phasor of the SSSC and the current phasor of the transmission line are perpendicular to each other, so the SSSC behaves like a reactance.
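The quadrature relationship described above can be illustrated with a small phasor computation. The current magnitude, current angle, and reference reactance below are arbitrary illustrative values, not data from the system of Fig. 1:

```python
import cmath
import math

# Hypothetical operating point: 1 pu line current at 30 degrees
I_line = cmath.rect(1.0, math.radians(30))
X_ref = 0.2  # illustrative reference compensation reactance (pu)

# An SSSC emulating a pure capacitive reactance injects a voltage
# in quadrature with (90 degrees behind) the line current
V_inj = -1j * X_ref * I_line

angle_deg = math.degrees(cmath.phase(V_inj / I_line))
print(abs(V_inj), angle_deg)  # magnitude X_ref * |I|, angle near -90 degrees
```

Flipping the sign of the injected voltage (a 180-degree change in the PWM phase reference) turns the same magnitude of injection into inductive compensation.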

### SSSC effect on small-signal stability

The ability of the power system to maintain the synchronism of synchronous generators during small disturbances is called small-signal stability^{18}. The single-machine infinite-bus system in Fig. 1 is considered for the analysis of small-signal stability, in which the dynamics of the synchronous generator stator are ignored^{8}. Moreover, for simplicity, the damper windings and resistances of the synchronous generator rotor are ignored. As a result, only a differential equation for the excitation circuit voltage plus two differential equations for the rotational motion of the synchronous generator remain, as expressed in (1).

$$\begin{aligned} p\Delta\omega_{r} & = \frac{1}{2H}\left( T_{m} - T_{e} - K_{D}\Delta\omega_{r} \right) \\ p\delta & = \omega_{o}\Delta\omega_{r} \\ p\Psi_{fd} & = \frac{\omega_{o} R_{fd}}{L_{adu}}E_{fd} - \omega_{o} R_{fd} i_{fd} \end{aligned}$$

(1)

where \(p\) is the first-order derivative operator, \(\Delta\omega_{r}\) is the rotor speed deviation, \(H\) is the inertia constant, \(T_{m}\) is the mechanical torque, \(T_{e}\) is the electromagnetic torque, \(K_{D}\) is the mechanical damping coefficient, \(\delta\) is the rotor angle, \(\omega_{o}\) is the rated rotational speed, \(\Psi_{fd}\) is the excitation circuit's flux linkage, \(R_{fd}\) is the excitation circuit's resistance, \(E_{fd}\) is the excitation output voltage, \(L_{adu}\) is the unsaturated mutual inductance between the rotor and the *d* axis of the stator, and \(i_{fd}\) is the excitation current.
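As a rough numerical sketch of the first two state equations in (1), a forward-Euler integration of a small torque imbalance can be run; the machine parameters below are hypothetical illustration values, not the paper's system data:

```python
import math

# Hypothetical machine data for illustration only
H = 3.5                      # inertia constant (s)
K_D = 2.0                    # mechanical damping coefficient (pu)
omega_o = 2 * math.pi * 60   # rated speed (rad/s)

def swing_rhs(d_omega_r, T_m, T_e):
    """Right-hand sides of the speed and angle equations in (1)."""
    p_d_omega = (T_m - T_e - K_D * d_omega_r) / (2 * H)
    p_delta = omega_o * d_omega_r
    return p_d_omega, p_delta

# Forward-Euler integration of a small, constant torque imbalance
dt, d_omega_r, delta = 0.001, 0.0, 0.5
for _ in range(1000):  # 1 s of simulated time
    p_d_omega, p_delta = swing_rhs(d_omega_r, T_m=0.90, T_e=0.85)
    d_omega_r += dt * p_d_omega
    delta += dt * p_delta

print(d_omega_r, delta)  # speed deviation heads toward (T_m - T_e)/K_D
```

With constant torques the speed deviation settles toward \((T_m - T_e)/K_D\); in the real model \(T_e\) depends on the states, which is what the rest of this section develops.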

For a full presentation of the state model, \(T_{e}\) and \(i_{fd}\) must be expressed in terms of the state variables. For the excitation current:

$$i_{fd} = \frac{\Psi_{fd} - \Psi_{ad}}{L_{fd}}$$

(2)

where \(\Psi_{ad}\) and \(\Psi_{aq}\) are the flux-linkage components between the stator and the rotor on the *d* and *q* axes, respectively, and \(i_{d}\) and \(i_{q}\) are the stator currents. The equation for the flux linkage of the excitation circuit is:

$$\Psi_{fd} = L_{fd} i_{fd} + \Psi_{ad}$$

(3)

where \(L_{fd}\) is the self-inductance of the excitation circuit. The mutual flux-linkage relations are expressed in Eq. (4), where \(L_{ads}\) is the saturated mutual inductance between the rotor and the stator:

$$\begin{aligned} \Psi_{ad} & = -L_{ads} i_{d} + L_{ads} i_{fd} \\ \Psi_{aq} & = -L_{aqs} i_{q} \end{aligned}$$

(4)

The stator voltage equations are described in (5), where \(e_{d}\) and \(e_{q}\) are the voltage components of the generator's terminal on the *d* and *q* axes, respectively. For the terminal voltage in the rotor synchronous reference frame:

$$\left. \begin{aligned} e_{d} & = L_{l} i_{q} - \Psi_{aq} \\ e_{q} & = -L_{l} i_{d} + \Psi_{ad} \end{aligned} \right\} \Rightarrow E_{t} = e_{q} + je_{d}$$

(5)

where \(L_{l}\) is the stator leakage inductance. Because the generator is connected to the infinite bus through the transmission line, the following relations govern:

$$\begin{aligned} e_{d} & = -X_{E} i_{q} + e_{bd} \\ e_{q} & = X_{E} i_{d} + e_{bq} \end{aligned}$$

(6)

The variable \(X_{E}\) is the equivalent reactance of the transmission line, and \(e_{bd}\) and \(e_{bq}\) are the infinite-bus voltage components. For the small-signal analysis, the state equations of the system must be linearised. Following the method presented in^{16} and considering the effect of the SSSC on small-signal stability, the variable \(\Delta X_{E}\), which represents the change in the transmission-line reactance, is introduced in the linearisation of the system. After linearising relations (5) and (6), rewriting relations (2), (3), and (4), and substituting them into (6), the small-signal model of the single-machine infinite-bus (SMIB) system is obtained as presented in Fig. 3.

In Fig. 3, \(G_{ex}(s)\) is the transfer function of the excitation system and \(G(s)\) is the transfer function of the SSSC damping controller. The constants \(k_{1}\) to \(k_{6}\) are defined as in^{8}. If the SSSC responds rapidly to changes in \(X_{\delta}\), the red lines in the block diagram of the system indicate the impact of the SSSC. The coefficients \(K_{X1}\), \(K_{X2}\), and \(K_{X3}\) are:

$$\begin{aligned} K_{X1} & = (n_{3} + n_{4})(\Psi_{ado} + L_{aqs} i_{do}) - (m_{3} + m_{4})(L_{ads} i_{qo} + \Psi_{aqo}) \\ K_{X2} & = \frac{L_{adu} L_{ads}}{L_{fd}}(m_{3} + m_{4}) \\ K_{X3} & = \frac{e_{do}}{E_{to}}(L_{l} + L_{aqs})(n_{3} + n_{4}) - \frac{e_{qo}}{E_{to}}(L_{l} + L_{ads})(m_{3} + m_{4}) \end{aligned}$$

(7)

and the coefficients \(n_{3}\), \(n_{4}\), \(m_{3}\), and \(m_{4}\) are:

$$\begin{aligned} n_{3} & = \frac{e_{bdo}}{D} \\ m_{3} & = \frac{\Psi_{fdo}\frac{L_{ads}}{L_{ads} + L_{fd}} - e_{bqo}}{D} \\ m_{4} & = \frac{-\left( L_{l} + X_{Eo} + L_{aqs} \right)\left( 2X_{Eo} + L_{aqs} + L_{ads}^{\prime} \right)m_{3}}{D} \\ n_{4} & = \frac{\left( L_{l} + X_{Eo} + L_{aqs} \right)\left( 2X_{Eo} + L_{aqs} + L_{ads}^{\prime} \right)n_{3}}{D} \end{aligned}$$

(8)

The controller of the SSSC compensator contributes to increasing system damping through the blocks \(K_{X1}\), \(K_{X2}\), and \(K_{X3}\). As shown in Fig. 3, the contribution of \(K_{X1}\) is direct, whereas those of \(K_{X2}\) and \(K_{X3}\) act indirectly through first-order lag blocks. Based on reference^{2}, the effects of \(K_{X2}\) and \(K_{X3}\) are negligible relative to \(K_{X1}\), and when the SSSC damping controller is designed, the \(G(s)\) function adds a zero- or 180-degree phase shift, depending on the sign of \(K_{X1}\). For multimachine systems, relations (1) to (6) become matrix relations, and the constants \(K\) become matrices or vectors with the corresponding variables of the different machines. The diagram in Fig. 3 remains valid, but within each block matrix the variables of the different machines are coupled.

### SSSC damping controller

The SSSC damping controller is shown in Fig. 4. A low-pass filter removes high-frequency noise from the generator-speed input signal. The frequency range of interest in small-signal stability is 0.1 to 2 Hz; therefore, the cutoff frequency of the low-pass filter is set to 10 Hz.

The washout filter acts as a high-pass filter: it passes the oscillatory components but blocks the steady-state (DC) component, so the damping path contributes nothing in steady state. Together, these two filters ensure that the controller responds only to frequencies in the studied range of small-signal stability.
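The band-pass effect of the two filters can be checked numerically. The 10 Hz low-pass cutoff follows the text, while the washout time constant below is a hypothetical value chosen only for illustration:

```python
import cmath
import math

T_lp = 1 / (2 * math.pi * 10)  # first-order low-pass with a 10 Hz cutoff
T_w = 10.0                     # hypothetical washout time constant (s)

def band_gain(f_hz):
    """|washout * low-pass| gain of the filter cascade at frequency f_hz."""
    s = 1j * 2 * math.pi * f_hz
    washout = (s * T_w) / (1 + s * T_w)  # blocks the steady-state component
    lowpass = 1 / (1 + s * T_lp)         # removes high-frequency noise
    return abs(washout * lowpass)

for f in (0.001, 1.0, 1000.0):
    print(f, band_gain(f))  # tiny at DC, near 1 in band, tiny at high frequency
```

A 1 Hz oscillation (inside the small-signal band) passes almost unattenuated, while near-DC drift and high-frequency noise are both suppressed.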

### Intelligent random optimisation algorithms

Optimisation, random search, and evolutionary algorithms are efficient methods for finding optimal solutions to problems^{19}. The randomness of these algorithms prevents them from being trapped in local optima. In practical optimisation problems, such as engineering design, organisational management, and economic systems, the focus is on obtaining globally optimal solutions. Many of these algorithms are inspired by biological systems; the firefly algorithm (FA) is one example.

#### Firefly algorithm

The FA was presented in 2005, and its theoretical bases were developed in 2006–2008^{20}. The algorithm searches for an optimal solution by modelling the behaviour of a set of fireflies. The FA assigns each firefly a value related to the fitness of its location, modelled as the firefly's luciferin (light-emitting pigment), and updates the fireflies' locations over successive iterations^{21}. The two main phases of each iteration are the luciferin update and the motion update. Fireflies move toward neighbours that carry more luciferin; accordingly, over successive iterations, the population tends toward better solutions.

Swarm intelligence, as it occurs in natural communities, is the result of actions carried out by individuals according to local information. Typically, this collective behaviour accomplishes more complex and larger-scale goals. Examples of this phenomenon include ants, honeybees, and birds. The decentralised decision-making mechanisms of these and other natural species have inspired the design of large-scale algorithms for solving complex problems such as optimisation, multi-criteria decision making, and robotics. In this section, an algorithm based on firefly social behaviour is investigated.

#### Initialising the fireflies

The FA is initiated by randomly placing an n-member population of fireflies at different points in the search space; random values are first selected for the independent variables of the problem. Initially, all fireflies carry the same amount of luciferin. Each iteration of the algorithm includes a luciferin-update phase and a motion phase.

#### Updating luciferin

The amount of luciferin in each firefly is determined during each iteration, depending on the fitting of its location. Thus, during each iteration, a value is added to the current luciferin of each firefly according to the amount of fitness determined for that firefly. In addition, to model the gradual decrease of the residual value, the amount of current luciferin is reduced by a factor of less than 1. In this way, the relationship between luciferin updates is as follows:

$$\ell_{i}(t) = (1 - \rho)\ell_{i}(t - 1) + \gamma J(x_{i}(t))$$

(9)

where \(\ell_{i}(t)\), \(\ell_{i}(t-1)\), and \(J(x_{i}(t))\) are the new luciferin value, the previous luciferin value, and the location fitness of firefly \(i\) in iteration \(t\) of the algorithm, and \(\rho\) and \(\gamma\) are constants modelling the gradual decay of luciferin and the effect of fitness on it, respectively. At this stage, the fitness of each member of the population must be calculated: the fitness of each member is the value of the objective function defined for the problem, evaluated at the values of the independent variables associated with that firefly.
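Eq. (9) translates directly into code; the \(\rho\) and \(\gamma\) values below are arbitrary tuning constants chosen for illustration, not values from the paper:

```python
def update_luciferin(l_prev, fitness, rho=0.4, gamma=0.6):
    """Eq. (9): decay the previous luciferin and add a fitness-proportional term."""
    return (1 - rho) * l_prev + gamma * fitness

# One update step: (1 - 0.4) * 5.0 + 0.6 * 2.0
l = update_luciferin(5.0, fitness=2.0)
print(l)
```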

#### Firefly motion

During the motion phase, each firefly moves probabilistically toward one of its neighbours with higher luciferin. For each firefly \(i\), the probability of moving toward a brighter neighbour \(j\) is defined as follows:

$$p_{ij}(t) = \frac{\ell_{j}(t) - \ell_{i}(t)}{\sum_{k \in N_{i}(t)} \left( \ell_{k}(t) - \ell_{i}(t) \right)}$$

(10)

where \(N_{i}(t)\) is the set of fireflies neighbouring firefly \(i\) at time \(t\), \(d_{ij}(t)\) is the Euclidean distance between fireflies \(i\) and \(j\) at time \(t\), and \(r_{i}^{d}(t)\) is the neighbourhood range of firefly \(i\) at time \(t\). Assuming that firefly \(j\) is selected by firefly \(i\) (with probability \(p_{ij}\)), the discrete-time motion equation of the firefly can be written as follows:

$$x_{i}(t + 1) = x_{i}(t) + s\left( \frac{x_{j}(t) - x_{i}(t)}{\left\| x_{j}(t) - x_{i}(t) \right\|} \right)$$

(11)

where \(x_{i}(t)\) is the m-dimensional location vector of firefly \(i\) at time \(t\), \(\left\| \cdot \right\|\) denotes the Euclidean norm, and \(s\) is the step size.
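A minimal sketch of the motion step of Eq. (11), assuming neighbour \(j\) has already been selected via Eq. (10); the positions and step size below are illustrative:

```python
import math

def move_toward(x_i, x_j, s=0.5):
    """Eq. (11): hop a fixed step s from x_i toward the chosen neighbour x_j."""
    diff = [b - a for a, b in zip(x_i, x_j)]
    norm = math.sqrt(sum(d * d for d in diff))
    if norm == 0:
        return list(x_i)  # coincident fireflies: stay put
    return [a + s * d / norm for a, d in zip(x_i, diff)]

x_new = move_toward([0.0, 0.0], [3.0, 4.0], s=0.5)
print(x_new)  # a step of length 0.5 along the unit vector (0.6, 0.8)
```

Note that the step length is fixed at \(s\); only the direction depends on the selected neighbour.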

#### Updating the neighbourhood range

Assuming \(R_{o}\) is the initial neighbourhood range of each firefly, the neighbourhood range of each firefly is updated during each iteration of the algorithm as follows:

$$r_{i}^{d}(t + 1) = \min\left\{ r_{s},\ \max\left\{ 0,\ r_{i}^{d}(t) + \beta\left( n_{t} - \left| N_{i}(t) \right| \right) \right\} \right\}$$

(12)

where \(\beta\) is a constant parameter, \(n_{t}\) is a parameter for controlling the desired number of neighbours, and \(r_{s}\) is the maximum sensing range.
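Eq. (12) as code, with illustrative values for \(r_{s}\), \(\beta\), and \(n_{t}\) (tuning constants, not values from the paper):

```python
def update_range(r_d, n_neighbours, r_s=5.0, beta=0.08, n_t=5):
    """Eq. (12): widen the range when neighbours are scarce, shrink it when
    crowded, and clip the result to [0, r_s]."""
    return min(r_s, max(0.0, r_d + beta * (n_t - n_neighbours)))

print(update_range(2.0, n_neighbours=1))  # few neighbours -> range grows
print(update_range(2.0, n_neighbours=9))  # crowded -> range shrinks
```

This adaptive range keeps each firefly interacting with roughly \(n_{t}\) neighbours, which helps the population split across multiple promising regions.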

#### Stop criterion

If the stop criterion of the algorithm is not met, the algorithm performs another iteration. The stop criterion can be defined as a fixed number of iterations, chosen as a trade-off between run time and the precision of the estimated \(K\) variables. The firefly with the highest fitness value is taken as the output of the algorithm.

It should be noted that global optimisation approaches regularly show slow convergence rates because of their random search, particularly near the region of the global optimum. The firefly algorithm may also fail to find the true optimum if started from a different initial condition, so the effect of this uncertainty on damping performance needs to be investigated. In this regard, hybrid algorithms, which combine the advantages of both methodologies while alleviating their inherent disadvantages, are of interest, and future studies on this topic are therefore recommended. In such an approach, the hybrid algorithm starts with one algorithm during the initial optimisation stages to find a near-optimum solution and accelerate convergence. The search is then switched to the FA: the best solution found by the first algorithm is taken as the FA's starting point and fine-tuned. In this way, the hybrid algorithm may find an optimal solution more quickly and accurately.
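The hand-off described above can be sketched with a toy one-dimensional objective. Both stages are deliberately simplified stand-ins (uniform random sampling for the first stage, a shrinking-step local search standing in for the FA fine-tuning stage); the objective and all constants are hypothetical:

```python
import random

def objective(x):
    """Toy fitness to maximise; a stand-in for the ITAE-based objectives."""
    return -(x - 3.7) ** 2

random.seed(1)

# Stage 1: coarse global search quickly locates a near-optimum starting point
best_x = max((random.uniform(-10, 10) for _ in range(200)), key=objective)

# Stage 2: hand the best point over for fine-tuning with shrinking hops
step = 0.5
for _ in range(100):
    cand = best_x + random.uniform(-step, step)
    if objective(cand) > objective(best_x):
        best_x = cand
    else:
        step *= 0.95  # close in on the optimum as improvements become rare

print(best_x)  # converges near the true optimum at x = 3.7
```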

### Harmony search algorithm

Over the past decade, to overcome the computational deficiencies of mathematical algorithms, evolutionary or metaheuristic algorithms, such as simulated annealing and genetic algorithms, have been devised. However, the search for more powerful algorithms remains a challenge for engineers. The harmony search (HS) algorithm is a powerful search algorithm for finding the optimal answer^{22}.

In composing music, several musicians collaborate with different instruments, aiming to produce beautiful music. During this collaborative process, each musician attempts to play the best notes each time to create better music, and the beauty of the music improves as they collaborate. Typically, the music evolves at each stage so that harmony is created between the musicians.

Over time, the musicians produce a musical piece by playing different harmonies. After playing several pieces, the musicians recall the pieces (harmonies) they have played. Suppose that there are \(K\) harmonies composed by \(n\) musicians, and that the size of each musician's memory (HMS) equals \(K\) harmonies. Then a matrix is formed with \(K\) rows (the number of memorised harmonies) and \(n + 1\) columns, where \(n\) is the number of musicians (the number of variables affecting the problem) and one additional column holds the value of each harmony under the fitness function. This matrix is called the harmony memory (HM).
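The harmony-memory matrix can be sketched as follows; the variable count, memory size, and stand-in fitness function are illustrative (in this paper the variables would be \(K_{e}\), \(T_{1}\), \(T_{2}\) with the ITAE-based objectives):

```python
import random

random.seed(0)
n = 3    # number of decision variables ("musicians")
HMS = 5  # harmony memory size (number of memorised harmonies, K)

def fitness(harmony):
    """Hypothetical stand-in objective evaluated for each harmony."""
    return sum((v - 0.5) ** 2 for v in harmony)

# HM: K rows; n variable columns plus one column for the harmony's fitness
HM = []
for _ in range(HMS):
    harmony = [random.uniform(0, 1) for _ in range(n)]
    HM.append(harmony + [fitness(harmony)])

for row in HM:
    print(row)
```

Storing the fitness alongside each harmony makes step 4 of the algorithm (replacing the worst memorised harmony with a better new one) a single comparison on the last column.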

This algorithm consists of five steps:

1. Initialising the optimisation problem and initial parameters
2. Initialising harmony memory
3. Creating a new, improved harmony
4. Updating harmony memory
5. Repeating steps 3 and 4 until the final condition is satisfied or the desired number of iterations is completed

### Optimising the control parameters of the SSSC damper

To increase the damping of the studied system, two filters and a lead-lag phase-compensator block are included in the designed controller. The coefficients and time constants of the low-pass and washout filter blocks are determined by the studied frequency range: the cutoff frequency of the low-pass filter is set to 10 Hz, and the range of interest in small-signal stability is 0.2–2 Hz. The time constants of the phase compensator and the compensator gain are taken as the optimisation parameters. The optimal values of these parameters allow the SSSC damping controller to have the maximum effect in damping the low-frequency oscillations of the power system. Thus, the optimisation vector is defined as follows:

$$X_{POP} = \left[ K_{e} \quad T_{1} \quad T_{2} \right]$$

(13)

This vector is the input to the optimisation algorithm. A reasonable range for the search space is determined for the variables of the algorithm. Preserving the feasibility of the problem, the following intervals are set for the input vector:

$$\begin{gathered} 0 \le T_{1} \le 1 \\ 0 \le T_{2} \le 1 \\ 1 \le K_{e} \le 20 \end{gathered}$$

(14)
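A candidate vector produced by either algorithm can be kept inside the intervals of (14) with a simple clipping step; this is a common practical detail sketched here, not a procedure stated in the paper:

```python
# Search bounds from Eq. (14): (Ke, T1, T2)
BOUNDS = [(1.0, 20.0), (0.0, 1.0), (0.0, 1.0)]

def clip(x_pop):
    """Clamp each entry of [Ke, T1, T2] to its allowed interval."""
    return [min(hi, max(lo, v)) for v, (lo, hi) in zip(x_pop, BOUNDS)]

print(clip([25.0, -0.2, 0.7]))  # -> [20.0, 0.0, 0.7]
```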

To increase the damping of the system and reduce inter-area power oscillations, two control signals are considered. The first is the rotor speed deviation of the generators, and the second is the power variation in the inter-area transmission line. For each of these control signals, an objective function based on the integral of time-weighted absolute error (ITAE) criterion is defined:

$$\begin{gathered} OF_{1} = ITAE = \int_{0}^{t} t\left| \Delta\omega_{r} \right| dt \\ OF_{2} = \int_{0}^{t} t\left| \Delta P_{12} \right| dt \end{gathered}$$

(15)

where \(\Delta P_{12}\) is the inter-area power oscillation in the transmission line^{23}. The purpose of using intelligent algorithms is to minimise the defined objective functions over the allowed ranges of the control variables^{24}; minimising each objective function corresponds to maximum damping of the system and minimal small-signal oscillations.
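Numerically, \(OF_{1}\) is a simple time-weighted accumulation of the sampled signal. The damped-oscillation speed deviation below is synthetic, used only to show the discrete approximation of the integral:

```python
import math

dt, t_end = 0.01, 10.0

itae = 0.0
t = 0.0
while t < t_end:
    # Synthetic speed deviation: a 1 Hz oscillation decaying at 0.5 1/s
    d_omega = 0.01 * math.exp(-0.5 * t) * math.cos(2 * math.pi * t)
    itae += t * abs(d_omega) * dt  # rectangle rule for OF1 = int t|dw| dt
    t += dt

print(itae)
```

Because the error is weighted by \(t\), oscillations that persist late in the response are penalised most, which is why minimising ITAE pushes the optimiser toward fast, well-damped responses.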