Identifiability of structural networks of nonlinear electronic oscillators


Identifiability of networks of dynamical systems

The advantage of using electronic circuits to study identifiability is that the whole system can be configured as desired and all variables can be accessed, two conditions that are impossible to meet in the context of neuroscience. For this reason, it is important to detail all steps followed during the construction of the system under study and how identifiability was measured. Figure 1 describes the procedure we followed to determine whether a network is identifiable within a group of different structures. First, the dynamics of a set of $M$ structural networks were recorded ($M=3$ in the example of Fig. 1A). We refer to this first measurement of the time series of all dynamical units of each structural network as the “test” set. Second, the (weighted) adjacency matrices of the corresponding functional networks were obtained by quantifying the synchronization between oscillators. Third, the dynamics were recorded again and the second step was repeated in order to obtain a set of “re-test” functional networks. Finally, the test and re-test functional networks were compared in order to decide whether a structural network can be recognized from the comparison of its functional networks, using the previous observation (i.e., the test) as the identification key (Fig. 1B). Two different scenarios may arise (see Fig. 1C): (1) functional networks of the same structural network have high similarity between them but low similarity with the rest, which indicates that the system is identifiable (Fig. 1C, first case), or (2) functional networks of the same structural network have the same level of similarity as functional networks from other structural networks, which is the signature of an unidentifiable system. Note that in the latter case, the reason can be either that all functional networks have very low similarity between them (Fig. 1C, second case) or, on the contrary, that the similarity is very high in all cases, even when comparing two functional networks obtained from different structures (Fig. 1C, third case).

Experimental setup

Figure 1

Schematic representation of the identifiability phenomenon. (A) The dynamics of the nodes of three different structural networks (i, ii, and iii) are recorded (“test”). (B) The coordination between all pairs of nodes is quantified, and the corresponding functional networks are obtained. In the figure, we plot the matrices containing the weights of the links of the functional networks. Next, a second measurement is carried out (“re-test”), and the corresponding functional network is compared with the previous ones. (C) All test (rows) and re-test (columns) functional networks of each structure (i, ii, and iii) are compared pairwise. The elements of the matrix contain the correlations ($p$) between each pair of functional networks. Identifiable systems are those whose diagonal has much higher values than the off-diagonal ones (first matrix of the plot). On the contrary, when all values are low (second matrix) or high (third matrix), identification is not possible.

We carried out a series of experiments using nonlinear electronic circuits to test how identifiability is related to the level of synchronization of a functional network and its robustness against noise. We constructed $M=20$ structural networks composed of $N=28$ diffusively coupled electronic Rössler oscillators24. The equations of the dynamical systems, together with the values of their electronic components (Table 1) and schematic diagrams (Figs. 7, 8), are detailed in the “Methods” section. The Rössler oscillators were set to have chaotic dynamics in order to better observe the transition from unsynchronized to fully synchronized dynamics. All structural networks had the same number of nodes and the same degree distribution; the only difference is that we reshuffled the connections between oscillators in order to obtain the 20 different structures (see “Methods” for details about the structural networks). Figure 2 shows a schematic description of the experiment. We used an electronic array (EA), a personal computer (PC), and a data acquisition card (DAQ) composed of 28 analog-to-digital converters (ADCs) and 2 digital-to-analog converters (DACs) to record and control the dynamics of the networks. The EA comprised the 28 Rössler electronic oscillators and their corresponding electronic couplers, which sent the dynamics of the oscillators to their outgoing neighbors and, at the same time, collected the inputs from their incoming neighbors. The 28 analog ports (AI0–AI27) acquired the signal of each oscillator. The coupling strength $\kappa$ of the whole network was controlled by two digital potentiometers (XDCP), which were tuned by the signals coming from digital ports P0.0 and P0.1 (DO). During the experiment, we did not add noise to the oscillators, since noise was later introduced directly into the functional networks.

Figure 2

Experimental setup. Schematic representation of the experimental arrangement of a network containing $N=28$ electronic Rössler oscillators. The coupling strength between oscillators ($\kappa$) is adjusted by means of digital potentiometers X9C103 (XDCP), whose resistance is controlled through digital pulses sent by a DAQ (model NI USB 6363, from National Instruments). Port P0.0 is used to increase or decrease the resistance of the digital potentiometer, and port P0.1 sets the initial value (allowing for 100 discretized values of $\kappa$).

Estimating differential identifiability

For each of the $M=20$ structural networks, we recorded test and re-test dynamics (two acquisitions) of the $N=28$ oscillators for coupling strengths ranging from $\kappa=0$ to $\kappa=1$, which guaranteed that all structural networks reached synchronization of the ensemble. For each value of $\kappa$, we obtained the corresponding two functional networks (test and re-test) of a given structural network by computing the phase synchronization between each pair of oscillators. For each node $j$, the instantaneous phase $\phi_j(t)$ was calculated from the Hilbert transform25,26 of the time series of its variable $v_2$ (see the equations in the “Methods” section for the definition of the Rössler variables). Next, we quantified the phase synchronization between the phases $\phi_j(t)$ and $\phi_k(t)$ of each pair of oscillators $j$ and $k$ as $w_{j,k}=\left|\left\langle e^{i[\phi_j(t)-\phi_k(t)]}\right\rangle\right|$, where $\langle \cdot \rangle$ indicates temporal averaging. Finally, the average synchronization of the whole network was obtained as $r=\frac{1}{N(N-1)}\sum_{j \ne k} w_{j,k}$.
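To make the synchronization measure concrete, the following Python sketch computes the pairwise phase-locking values and the network-averaged synchronization from the recorded time series, assuming the $v_2$ signals are stored as a NumPy array of shape $N \times T$; the array layout and function names are illustrative, not the authors' code.

```python
import numpy as np
from scipy.signal import hilbert

def phase_sync_matrix(v2):
    """Pairwise phase synchronization w_{j,k} from the v2 time series.

    v2 : array of shape (N, T), one row per oscillator (assumed layout).
    Returns W, an N x N symmetric matrix with values in [0, 1].
    """
    # Instantaneous phase of each oscillator from its analytic signal
    phases = np.angle(hilbert(v2, axis=1))            # shape (N, T)
    phasors = np.exp(1j * phases)                     # e^{i*phi_j(t)}
    # Temporal average of e^{i[phi_j(t) - phi_k(t)]} for every pair, then modulus
    W = np.abs(phasors @ phasors.conj().T) / v2.shape[1]
    np.fill_diagonal(W, 0.0)                          # discard self-synchronization
    return W

def average_sync(W):
    """Network-averaged synchronization r = 1/(N(N-1)) * sum_{j != k} w_{j,k}."""
    N = W.shape[0]
    return W.sum() / (N * (N - 1))
```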

For each coupling strength $\kappa$, the values of $w_{j,k}$ were used as the elements of the weighted adjacency matrix $\mathbf{A}(m,\kappa)$ associated with each functional network, with $m=1, 2, \ldots, 20$ being the index of the underlying structural network. We repeated the experiment to obtain a test and a re-test set of functional networks, named $\mathbf{A}(m,\kappa)$ and $\mathbf{A}^*(m,\kappa)$, respectively. In addition, we also obtained the adjacency matrices of the underlying structural networks $\mathbf{S}(m)$, which are independent of the coupling parameter $\kappa$.

Once the test and re-test functional networks were obtained, we calculated, for each coupling strength $\kappa$, the Pearson correlation coefficient $p_{ij}$ between all pairs $i$ and $j$ of the set of $M$ functional networks, where $i$ runs over the “test” functional networks and $j$ over the “re-test” ones. Finally, we constructed the identifiability matrix $\mathcal{I}$, an $M \times M$ matrix whose elements are the values of $p_{ij}$. Note that $\mathcal{I}$ is symmetric since $p_{ij}=p_{ji}$. Also note that the $m$-th element of the matrix diagonal (i.e., $i=j=m$) contains the Pearson correlation coefficient of structural network $m$ when its “test” and “re-test” functional networks are compared, quantifying how similar the functional networks of a given structure are.
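As an illustration of this step, the sketch below builds the identifiability matrix from the test and re-test functional networks at a single coupling strength. It assumes each set is stored as an array of $M$ weighted adjacency matrices and correlates only the upper-triangular weights of each pair; the vectorization choice is an assumption made for the example.

```python
import numpy as np

def identifiability_matrix(A_test, A_retest):
    """Identifiability matrix: Pearson correlation between every test/re-test pair.

    A_test, A_retest : arrays of shape (M, N, N) with the weighted adjacency
    matrices of the M functional networks at one coupling strength kappa.
    Element (i, j) correlates test network i with re-test network j.
    """
    M, N, _ = A_test.shape
    iu = np.triu_indices(N, k=1)                      # upper triangle, no diagonal
    test_vecs = np.array([A[iu] for A in A_test])
    retest_vecs = np.array([A[iu] for A in A_retest])
    I = np.empty((M, M))
    for i in range(M):
        for j in range(M):
            I[i, j] = np.corrcoef(test_vecs[i], retest_vecs[j])[0, 1]
    return I
```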

The identifiability matrix $\mathcal{I}$ contains useful information about how reproducible the functional network of a given structure is and, at the same time, how different it is from the functional networks supported by other structures. Therefore, we calculated the self-identifiability $I_{self}$ as the average of the diagonal values of $\mathcal{I}$, which indicates how similar the functional networks of a given structure are when they are re-tested. We also obtained $I_{others}$, the average of the off-diagonal elements of the identifiability matrix, which measures how similar (on average) functional networks obtained from two different structures are. Note that the lower $I_{others}$ is, the more identifiable a structure is within the set of $M$ different structures.

Finally, we obtained the differential identifiability $I_{diff}$ by comparing how different $I_{self}$ is from $I_{others}$12:

$$\begin{aligned} I_{diff}= (I_{self}-I_{others}) \times 100 \end{aligned}$$

(1)

The differential identifiability $I_{diff}$ indicates to what extent it is possible to distinguish a given network structure from a set of $M$ networks just by analyzing the organization of their corresponding functional networks. From now on, the differential identifiability $I_{diff}$ will be our indicator of the systems’ identifiability.
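A compact sketch of Eq. (1) applied to an identifiability matrix, with $I_{self}$ and $I_{others}$ computed as the diagonal and off-diagonal averages described above:

```python
import numpy as np

def differential_identifiability(I):
    """I_diff = (I_self - I_others) * 100, following Eq. (1).

    I_self  : mean of the diagonal of the identifiability matrix.
    I_others: mean of its off-diagonal elements.
    """
    M = I.shape[0]
    I_self = np.mean(np.diag(I))
    I_others = np.mean(I[~np.eye(M, dtype=bool)])
    return (I_self - I_others) * 100.0
```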

Functional vs. structural networks in a noisy scenario

Prior to the identifiability analysis, we investigated the interplay between functional networks and their underlying structures. Figure 3A shows the average synchronization parameter $r$ of all structural networks as a function of the coupling parameter $\kappa$. We can observe a smooth path to synchronization as $\kappa$ is increased. The shaded region of Fig. 3 indicates the standard deviation of the values of $r$ over all structural networks. The largest standard deviation is found where $r$ increases most steeply, i.e., during the transition from the unsynchronized to the synchronized manifold.

Figure 3

Interplay between functional and structural networks. In (A), the average phase synchronization $r$ of the whole set of structural networks is plotted as a function of the coupling strength $\kappa$. The shaded region is bounded by the standard deviation of the values of $r$, with the solid line being the average. In (B), the average correlation between functional and structural networks $Co(\mathbf{A}^{noise},\mathbf{S})$, also called the structural–functional correlation, is plotted as a function of the coupling parameter $\kappa$ and the noise level of the functional networks $\xi$.

Next, we perturbed the elements of all functional networks with a (Gaussian) white noise term of amplitude $\xi$, in order to obtain a set of noisy functional networks $\mathbf{A}^{noise}(m,\kappa,\xi)$. In this way, we account for the unavoidable presence of noise in real experiments, which affects the estimation of functional networks, as is the case, for example, for brain27 or molecular28 networks. We had two reasons for introducing noise into the elements of the functional networks rather than directly into the electronic circuits. On the one hand, we did not have access to 28 electronic noise generators (one for each oscillator). On the other hand, the consequences of introducing noise directly into the time series are difficult to interpret, since we are concerned with the organization of the functional networks and not with the dynamics on them. Increasing the noise injected into the oscillators does not change the structure of the functional networks linearly; in fact, the effects of such noise are highly nonlinear and depend on the kind of dynamical system being implemented and on the number of connections of each particular node. For this reason, we preferred to modify the elements of the functional matrix directly. In this way, we ensure that the organization of the functional network changes as the noise amplitude is increased. Furthermore, the effects of noise on the functional networks do not depend on the kind of metric (phase synchronization) we used to quantify synchronization. In Fig. 3B, we plot the correlation $Co(\mathbf{A}^{noise},\mathbf{S})$ between the matrices of the noisy functional networks $\mathbf{A}^{noise}(m,\kappa,\xi)$ and the underlying structural ones $\mathbf{S}(m)$. We can observe that there exists a region of values of $\kappa$ where the correlation between functional and structural matrices is maximized. This is the most convenient scenario for estimating the network structure based on the observation of its dynamics. Note that this region arises for relatively low values of the coupling strength $\kappa$ (around $\kappa \sim 0.075$). The reason is that high values of $\kappa$ lead to high synchronization of the majority of the oscillators of the network, no matter whether they are structurally linked or not, introducing spurious functional links into the estimation. As a consequence, the identification of structural networks based on the observation of node dynamics relies on the existence of a partially incoherent state of the system.
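The following sketch shows one way to perturb a functional network with Gaussian white noise of amplitude $\xi$ and to compute the structural–functional correlation $Co(\mathbf{A}^{noise},\mathbf{S})$. The exact scaling of $\xi$ and the clipping of the perturbed weights are assumptions made for the example, not details given in the text.

```python
import numpy as np

def add_matrix_noise(A, xi, rng=None):
    """Perturb the functional weights with Gaussian white noise of amplitude xi.

    Assumption: zero-mean Gaussian noise with standard deviation xi is added to
    each upper-triangular weight, the matrix is symmetrized, and weights are
    clipped to [0, 1] so they remain valid synchronization values.
    """
    rng = np.random.default_rng() if rng is None else rng
    N = A.shape[0]
    noise = np.triu(rng.normal(0.0, xi, size=(N, N)), k=1)
    noise = noise + noise.T                            # keep the matrix symmetric
    A_noisy = np.clip(A + noise, 0.0, 1.0)
    np.fill_diagonal(A_noisy, 0.0)
    return A_noisy

def structural_functional_correlation(A_noisy, S):
    """Pearson correlation Co(A_noisy, S) between functional and structural links."""
    iu = np.triu_indices(A_noisy.shape[0], k=1)
    return np.corrcoef(A_noisy[iu], S[iu])[0, 1]
```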

Concerning the effect of the noise amplitude $\xi$, we observe an impairment of the correlation in all cases. However, it is always around $\kappa \sim 0.075$ where the correlation is highest compared to other coupling strengths.

Identifiability vs. noise and coupling strength

But how does the identifiability of the group of networks depend on both the coupling strength and the level of noise? To answer this question, we show in Fig. 4A the differential identifiability $I_{diff}$ of the $M$ networks as a function of the coupling parameter $\kappa$ and the level of noise $\xi$. We can observe that $I_{diff}$ is high for scenarios with low noise amplitudes, and it is completely lost for high noise levels (in this particular case, for $\xi > 0.6$). Concerning the dependence on the coupling parameter, Fig. 4A shows that when $\kappa$ was increased from zero, $I_{diff}$ increased monotonically until reaching a maximum around $\kappa \sim 0.15$. If the coupling was increased above this value, identifiability decreased again. The reason is that, as we have seen in the previous section, high values of coupling led to a scenario close to complete synchronization of the functional network, introducing a large number of spurious functional links, which, in turn, resulted in very homogeneous networks, no matter what the underlying structure was. As a consequence, all functional networks looked similar, and it was difficult to distinguish them.

Figure 4

Identifiability as a function of the coupling strength $\kappa$ and the noise amplitude $\xi$. In (A), the differential identifiability $I_{diff}$ is plotted vs. $\kappa$ and $\xi$. In (B), we apply PCA to obtain $I_{diff}^{PCA}$. Finally, in (C), we plot the improvement of identifiability when PCA is applied, measured as $I_{diff}^{PCA} - I_{diff}$.

Going back to $\kappa \sim 0.15$, we can see that, around this value, the network identifiability was especially robust against noise. Only for $\xi > 0.2$ did $I_{diff}$ begin to decrease, suffering a reduction of around 50% when $\xi \sim 0.4$. If we revisit Fig. 3A, we can observe that this particular value of the coupling ($\kappa \sim 0.15$) was close to the region where the networks were beginning to synchronize, with low values of the order parameter $r$. Therefore, as happened when inferring the network structure from the observation of its dynamics, low values of the coupling parameter were the most adequate to promote, in this case, the identifiability of structural networks based on their dynamics.

Identifiability is increased using the differential identifiability framework

It is worth noting that identifiability strongly depends on the way the similarity between functional networks is quantified. Regarding this point, Amico et al.12 demonstrated that introducing principal component analysis (PCA) before comparing functional networks reduces the effect of the intrinsic noise of the system, increasing the level of identifiability. Therefore, we followed the same methodology12 and compared each pair of functional networks using only the set of the $l$ principal components that maximized the correlation between functional networks. The procedure was as follows: (1) for each coupling strength $\kappa$ assessed, we carried out a group-level PCA decomposition including all test/re-test functional networks obtained from the $M=20$ structural networks (hence the dimensionality of the data and the number of principal components being $2M=40$); (2) we calculated $I_{diff}(l)$ between the matrices containing only the first $l$ components (in descending order of explained variance), with $1 \le l \le 2M$; and (3) we identified the number of principal components $l_{max}$ that maximized the value of $I_{diff}(l)$ and hence uncovered more functional fingerprints of the circuits. In this way, we obtained the optimal differential identifiability as $I_{diff}^{PCA}= \max(I_{diff}(l))$, with $l=1,2,\ldots,2M$ being the number of principal components. Using this methodology, we can assess the levels of differential identifiability achieved as we include more and more components and find the optimal number at which $I_{diff}$ is maximum. This raises the following question: are we effectively improving the identifiability of this set of circuit-based structural networks? Figure 4A shows the $I_{diff}$ values on the original data (i.e., no PCA decomposition/reconstruction applied) for a wide range of coupling strengths $\kappa$ and noise amplitudes. Figure 4B shows the analogous assessment when reconstructing the functional networks, at each $(\kappa, \xi)$ configuration, with the number of components that maximized $I_{diff}$ (i.e., $I_{diff}^{PCA}$). It can be observed that the qualitative behavior was similar to the results obtained on the original data. However, both the maximum value $I_{diff}^{PCA}$ and the region where this value was achieved were larger. For couplings close to $\kappa \sim 0.15$, high values of differential identifiability were maintained even for regimes of moderate noise ($\xi \sim 0.4$).
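A possible implementation of this group-level PCA procedure is sketched below: the $2M$ vectorized functional networks are decomposed jointly, reconstructed with the first $l$ components, and $I_{diff}(l)$ is maximized over $l$. The use of scikit-learn's PCA and the reconstruction details are assumptions made for illustration, not the authors' exact code.

```python
import numpy as np
from sklearn.decomposition import PCA

def optimal_identifiability_pca(A_test, A_retest):
    """Return I_diff^PCA = max over l of I_diff computed on the networks
    reconstructed from their first l principal components (1 <= l <= 2M)."""
    M, N, _ = A_test.shape
    iu = np.triu_indices(N, k=1)
    # Stack the 2M vectorized networks (rows) for the group-level decomposition
    X = np.array([A[iu] for A in A_test] + [A[iu] for A in A_retest])
    pca = PCA(n_components=2 * M)
    scores = pca.fit_transform(X)

    best = -np.inf
    for l in range(1, 2 * M + 1):
        # Rebuild every network keeping only the first l components
        X_rec = scores[:, :l] @ pca.components_[:l, :] + pca.mean_
        I = np.empty((M, M))
        for i in range(M):
            for j in range(M):
                I[i, j] = np.corrcoef(X_rec[i], X_rec[M + j])[0, 1]
        I_diff = (np.mean(np.diag(I)) - np.mean(I[~np.eye(M, dtype=bool)])) * 100.0
        best = max(best, I_diff)
    return best
```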

Figure 4C combines both results by calculating the improvement ($I_{diff}^{PCA} - I_{diff}$) achieved when applying PCA. Interestingly, the highest improvement corresponded to regions of moderate noise where, at the same time, the amount of coupling was low.

Finally, we investigated the interplay between inferring structural networks from the observation of their dynamics and the differential identifiability. As we have seen in Fig. 3B, the highest correlation between the structural and functional networks $Co(\mathbf{A}^{noise},\mathbf{S})$ was obtained for a coupling parameter $\kappa \sim 0.075$. In Fig. 5A we chose this particular value of $\kappa$ and plot (1) the correlation between the structural and functional networks and (2) the optimal differential identifiability as a function of the noise amplitude $\xi$, respectively $Co(\mathbf{A}^{\xi},\mathbf{S})$ and $I_{diff}^{PCA}(\xi)$. Vertical dashed lines indicate the value of the noise amplitude at which the structural–functional correlation and the optimal identifiability began to decrease. Note how the structural–functional correlation is more affected by low-to-moderate noise amplitudes, beginning its decline much sooner than the identifiability, which held its initial value up to $\xi \sim 0.4$. We normalized both parameters by dividing their values by their means, obtaining the normalized identifiability $I_{norm}=\frac{I_{diff}^{PCA}(\xi)}{\langle I_{diff}^{PCA} \rangle}$ and the normalized structural–functional correlation $Co_{norm}=\frac{Co(\xi)}{\langle Co(\xi) \rangle}$. Next, we calculated $I_{norm}-Co_{norm}$. The inset of Fig. 5A shows that the maximum difference between both normalized parameters corresponds to intermediate noise amplitudes ($\xi \sim 0.5$). In other words, for intermediate values of noise, the correlation between structural and functional networks decreases, whereas the identifiability parameter better maintains the values reported without noise sources. Figure 5B shows the behaviour of the same variables as a function of the coupling strength $\kappa$. In this case, we set the noise amplitude to $\xi =0.18$ and analyzed how the correlation $Co(\mathbf{A}^{\kappa},\mathbf{S})$ between the structural and functional networks changes, paying attention at the same time to the value of $I_{diff}^{PCA}(\kappa)$. We observe that $I_{diff}^{PCA}(\kappa)$ exceeds $Co(\mathbf{A}^{\kappa},\mathbf{S})$ for a wide range of coupling strengths. Also note that very low and very high values of $\kappa$ have negative consequences on both (1) the correlation between the structural and functional networks and (2) the identifiability. In this way, a maximum appears around $\kappa =0.05$ for both variables. As we can observe in Fig. 5B, both peaks are very close to each other. However, the peak of the identifiability is reached first, i.e., lower coupling strengths are needed to achieve the highest identifiability. Furthermore, the identifiability better withstands the drawbacks of increasing the coupling strength beyond the peak. This fact can be better observed in the normalized values $Co_{norm}=\frac{Co(\kappa)}{\langle Co(\kappa) \rangle}$ and $I_{norm}=\frac{I_{diff}^{PCA}(\kappa)}{\langle I_{diff}^{PCA} \rangle}$, whose difference is plotted in the inset of Fig. 5B. $I_{norm}-Co_{norm}$ reveals that the normalized identifiability is higher than the structural–functional correlation for moderate values of the coupling strength ($0.1< \kappa < 0.56$).
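The normalization used for the comparison in the insets of Fig. 5 amounts to dividing each curve by its own mean before taking the difference; a minimal sketch, assuming both curves are sampled over the same noise amplitudes (or coupling strengths):

```python
import numpy as np

def normalized_difference(I_diff_pca, Co):
    """I_norm - Co_norm: each curve is divided by its own mean before comparison.

    I_diff_pca, Co : 1-D arrays sampled over the same values of xi (or kappa).
    """
    I_norm = I_diff_pca / np.mean(I_diff_pca)
    Co_norm = Co / np.mean(Co)
    return I_norm - Co_norm
```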

Figure 5

Synchronization vs. identifiability. In (A), we show the correlation between the structural and functional networks $Co(\mathbf{A}^{\xi},\mathbf{S})$ (blue line, values on the left vertical axis) and the optimal differential identifiability $I_{diff}^{PCA}$ (orange line, values on the right vertical axis) as a function of the noise amplitude $\xi$ for a fixed coupling parameter $\kappa =0.075$. In the inset of (A), we plot the difference between the normalized values, $I_{norm}-Co_{norm}$. Dashed lines indicate the values of $\xi$ at which $Co(\mathbf{A}^{\xi},\mathbf{S})$ and $I_{diff}^{PCA}$ begin to decrease significantly. In (B), we plot the same variables as a function of the coupling strength $\kappa$. In this case, we set the noise amplitude to $\xi =0.18$.


