Introduction

Automated driving technology is the core technology in the development of intelligent vehicles. In 2021, United Nations Economic Commission for Europe (UNECE) Regulation R157 became the world's first certification regulation for AVs, marking the start of Level 3 autonomous driving for intelligent vehicles. The Mercedes-Benz EQS received the world's first L3 autonomous driving certification, issued by the German Federal Motor Transport Authority; the Mercedes-Benz EQS thus officially opened the intelligent era of autonomous driving. In the future market, AVs will increasingly offer product features certified under Level 3 autonomous driving regulations. This means that a growing number of AVs on the road will be able to continuously perform the full range of dynamic driving tasks, such as environmental sensing, decision planning, and driving operations, under the operating conditions for which the system is designed; human drivers are still required to maintain attention at all times but do not need to operate the vehicle. In case of an emergency, however, the human driver is required to take over the vehicle (SAE 2021). Nevertheless, higher-level autonomous driving technology is still in the early stages of commercialization.

In the process of moving from advanced driver assistance toward advanced autonomous driving, user acceptance of automated driving technology and safety perceptions have become the main factors limiting the commercialization of AVs. In recent years, AVs have been involved in several traffic accidents, and the acceptance of autonomous driving technology has declined. For example, in 2019, a survey by the American Automobile Association (AAA) found that 71% of US drivers said they were afraid of riding in a fully autonomous vehicle; this was essentially unchanged from 78% in an early 2017 survey and 73% in a 2018 survey (AAA 2019).
However, this was a significant increase from 63% in a survey conducted at the end of 2017 and was consistent with the findings of Lienert (AAA 2018; Lienert 2018). In addition, China's State Administration for Market Regulation reported 29 recalls related to intelligent vehicle systems and functions, involving 485,000 vehicles, over the last five years; this reflects a rapid upward trend (Gmw 2020). Evidently, with the rapid development of autonomous driving technology and the complex and variable nature of the traffic environment, the number of safety incidents and vehicle recalls for AVs continues to climb. This directly affects users' trust in the safety of self-driving technology and weakens people's willingness to purchase and use self-driving cars (Kang et al. 2020). Therefore, it is necessary to understand the reasons for the decline in user acceptance of AVs following a spate of AV accidents. In this context, it is important to explore the potential risk factors affecting AV acceptance in terms of autonomous driving safety and to determine how technology acceptance can be improved to support the automation upgrade and wider adoption of AVs.

Literature Review

Safety Risk Perception Associated with AVs

The interpretation and prediction of the acceptance of higher-level AVs has received significant research attention from both academics and the automotive industry. Scholars have found that perceptual factors related to autonomous driving technologies, such as perceived ease of use, perceived usefulness, perceived safety, and perceived risk, have significant effects on user acceptance (Gkartzonikas and Gkritza 2019; Zoellick et al. 2019; Moody et al. 2020). In recent years, following reports of traffic accidents involving AVs, AV safety has become a key indicator influencing user intention to adopt AVs. Previous studies have shown that the safety risks of AVs and their scope are increasingly valued by users (Karnouskos 2021; Perello-March et al.
2022). The majority of respondents in previous studies have ranked self-driving vehicle safety risks as their most important concern (Gold et al. 2015; Bansal et al. 2016; Shin and Managi 2017; Kaur and Rampersad 2018; Liu et al. 2019; Ha et al. 2020). In analyzing previous studies, it was found that the perceived risks hindering the adoption of autonomous driving technologies can be grouped into five categories: economic risks, cyber security, information privacy disclosure, autonomous driving system risks, and weather and terrain risks. These five categories of risks and their corresponding main issue items are summarized in Table 1. Existing research has focused on the role of safety risks related to connected vehicles, such as cyber security and information privacy leakage, as well as on the reliability of single-vehicle intelligence and other safety risks affecting technology adoption.

Table 1. Summary of studies relevant to risk perception for AVs

1. Economic risks. Main factors: increase in initial cost; higher-than-expected maintenance costs; legal liability of the driver or owner of the AV. Literature: Bansal et al. (2016), Talebian and Mishra (2018), Acheampong and Cugurullo (2019), and Deng et al. (2020).
2. Cyber-security risks. Main factors: hacking of the vehicle's computer system; risk of failure due to operating system crashes; risk of failure due to virus attacks; risk of failure due to internet disconnection. Literature: Gold et al. (2015), Bansal et al. (2016), Liu et al. (2019), Talebian and Mishra (2018), Acheampong and Cugurullo (2019), and Chikaraishi et al. (2020).
3. Information privacy disclosure. Main factors: track records; sharing of personal information with other entities; surveillance. Literature: Gold et al. (2015), Bansal et al. (2016), and Liu et al. (2019).
4. Automated driving system risks. Main factors: possible traffic accidents caused by technical failures; probability of a software failure or software error event; probability of a hardware or electronic failure; vehicle motion control risk. Literature: Shin and Managi (2017), Kaur and Rampersad (2018), Zhang et al. (2019), Chikaraishi et al. (2020), Ha et al. (2020), and Pascale et al. (2021).
5. Weather and terrain risks. Main factors: AVs can get into unexpected situations in bad weather conditions; AVs can have accidents in special terrain; AVs cannot cope with various weather conditions and terrain. Literature: Liu et al. (2019) and Deng et al. (2020).

However, few studies have explained the decline in technology acceptance as autonomous vehicle accidents have occurred. An increase in autonomous vehicle accidents may lead to changes in public opinions and attitudes toward autonomous driving technology, and AV accidents can create public misconceptions about, and antipathy toward, self-driving technology (Sinha et al. 2021). Analyzing accidents involving AVs, this study found that driver misconduct accounted for a certain percentage of accidents in autonomous driving mode. However, there has been a lack of research on the safety of human manipulation associated with autonomous driving in the operation of conditional (Level 3) and highly automated (Level 4) AVs.

Based on the six levels of vehicle automation defined by the Society of Automotive Engineers (SAE), AVs with conditional (Level 3) and high (Level 4) driving automation are defined as vehicles in which key features regarding safety and driving tasks are automated; drivers are able to transfer control and operation of driving tasks to the system in limited scenarios. The implication is that the safety of AVs may change significantly as they progress through the different levels. Even at the same level of automation, the functional architecture of AVs may vary. In summary, human–machine collaborative driving is fundamental to the safety assurance of higher-level AVs.
The need for drivers to accurately judge and correctly operate Level 3 autonomous driving functions in relation to the road and traffic environment and to complete the transfer and takeover of vehicle control is a significant challenge for AV safety. Therefore, it is necessary to understand the impact of risks related to human driver manipulation of automated driving systems on user acceptance in order to identify the real reasons for the decline in technology acceptance and to provide effective behavioral intervention methods for improving it.

Constructs from the Technology Acceptance Model and the Theory of Planned Behavior

In the automotive field, the technology acceptance model (TAM) and the theory of planned behavior (TPB) are the two main theories that are generally accepted and widely used in explaining and predicting the acceptance of technology systems. TAM, proposed by Davis et al. in 1989, was one of the first theories on technology acceptance, developed early on in the information technology field (Davis et al. 1989). The original TAM consisted of four constructs: perceived usefulness (PU), perceived ease of use (PEOU), behavioral attitude (BA), and behavioral intention (BI). The two metrics PU and PEOU were proposed for evaluating the technology itself from the user's perspective; both have a direct and/or indirect positive impact on BI. Moreover, TAM has shown good performance when applied to AV acceptance studies (Rahman et al. 2017; Panagiotopoulos and Dimitrakopoulos 2018; Liu et al. 2019; Zhang et al. 2021).

TPB is the most influential extension of the theory of reasoned action (Ajzen 1991). It is an important theory that contributes to understanding the influence of psychological factors on behavior, explains behavioral change, and reveals the process of making rational decisions when evaluating actual actions.
TPB consists of four constructs: perceived behavioral control (PBC, the perceived ease of performing a particular behavior), subjective norm (SN, the perceived social pressure to adopt or not adopt a particular behavior), BA (i.e., the overall assessment of the behavior), and BI (i.e., a direct predictor of behavior). Among these constructs, BI is significantly and positively predicted by PBC, SN, and BA. TPB has been found to be able to explain, predict, and intervene in the occurrence of a phenomenon through the relationship between behavioral intention and actual behavior (Shalender and Sharma 2021). In addition, the validity of applying TPB to explain the process by which external factors influence the acceptance of AVs is supported by many previous studies (Rahman et al. 2017; Buckley et al. 2018; Acheampong and Cugurullo 2019; Gunawan et al. 2022).

Based on the foregoing analysis, this study aimed to explain the reasons for the decline in user acceptance of AVs following a spate of AV accidents and to improve acceptance of AVs. In the context of human–machine collaborative driving, this study proposed a new factor, human-manipulated risk perception (HMRP), related to the safety of AVs. The relationship between HMRP and AV acceptance and the mechanism of HMRP's influence were explored. Then, a hybrid model of mediation and moderation was designed to test the effectiveness of the proposed behavioral intervention approach in improving the acceptance of AVs.

Methods

Construction of the Improved Model

To explain the role of human-manipulated risk perception in the acceptance of AVs and the process of the corresponding impact, an improved acceptance model for AVs was constructed by fusing TAM, TPB, and human-manipulated risk perception (Fig. 1). The PU and PEOU constructs of TAM were used to characterize the perceived benefits and ease of operation of AVs, respectively.
Together with the human-manipulated risk perception variable proposed in this study, these three perceptual constructs formed a benefit–risk perception module to characterize AVs. Next, the BA, SN, and PBC constructs of TPB were added to explain the process by which the foregoing perceptual factors influence technology acceptance. Because higher-level autonomous driving technology is not yet widespread, actual use of the system could not be tested; therefore, BI was used as the outcome variable in this study (Sun et al. 2021; Detjen et al. 2021). Behavioral intention is the willingness to take a certain action or behavior and is the main factor determining whether a behavior occurs (Jung and Kim 2021).

Because the original TAM and TPB are generic theoretical models oriented to general technologies, their constructs cannot be used directly to explain phenomena related to the unique properties of AVs. Therefore, this study extended the concept of all constructs in the new model while following the assumed relationships between the constructs in the classical TAM and TPB theoretical models.

In the new model, there are three latent variables of technology perception: perceived ease of use, perceived usefulness, and perceived safety riskiness. Perceived ease of use refers to the perception that existing AVs have a set of functions that are easy to operate and an operating interface that matches people's driving habits (Hegner et al. 2019). Perceived usefulness is the ability of AVs to solve problems in travel and is a key factor in determining the public's use of self-driving technology. This variable is influenced by perceived ease of use.
Perceived safety riskiness refers to a user's perception, formed from external information, that autonomous vehicle accidents may cause physical injury; it has a negative effect on both behavioral attitude and perceived behavioral control.

Hypothesis 1: Perceived ease of use has a positive effect on behavioral attitude.

Hypothesis 2: Perceived ease of use has a positive effect on perceived usefulness.

Hypothesis 3: Perceived usefulness has a positive effect on behavioral attitude.

Travel interference and improper driver operation are the main causes of self-driving car accidents. Chinese traffic accident statistics report a recurring pattern in self-driving car accidents: on the highway, after the adaptive cruise control (ACC) function is engaged, Level 2 assisted-driving AVs drive autonomously; that is, the system controls the speed of the vehicle. During this period, a significant percentage of drivers involved in accidents did not take back control of the vehicle in response to changes in the traffic environment outside the vehicle, such as trucks changing into their lanes (TAMPS 2021). As higher-level autonomous driving features are developed, more and more vehicles will be equipped with autonomous driving systems of different levels, and people have been found to become overly dependent on them (Chikaraishi et al. 2020). It has also been found that the mismatch between technologies and products of different levels may be a main cause of traffic accidents, because currently marketed autonomous driving technologies cannot replace drivers in complex road environments (Li et al. 2020).

Based on the foregoing analysis, HMRP was proposed as a novel concept in this study; it characterizes the risk of potential human injury associated with the uncertainty of human manipulation in the context of human–machine collaborative driving.
In addition, HMRP in the improved model refers to the extent to which users perceive this risk through external information related to AVs. It has been shown that perceived risk is negatively related to behavioral attitudes and that functional perceived risk is more likely than affective perceived risk to influence user attitudes toward new products (Pascale et al. 2021). Beyond this, the present study inferred that user perceptions of maneuvering risks in the human–machine collaborative driving of AVs may negatively affect their perceived behavioral control over adopting AVs. Therefore, Hypotheses 4 and 5 were proposed.

Hypothesis 4: HMRP has a negative effect on behavioral attitude.

Hypothesis 5: HMRP has a negative effect on perceived behavioral control.

Subjective norms are factors that lead users to align their attitudes with those of others (Man et al. 2020). Because the public has a herd mentality and following habits, the influence of advertising, family, friends, and colleagues, as well as negative news, can cause user attitudes to shift. Because users' ability to both receive and analyze information is increasing, the influence of subjective norms on the acceptance of AVs is mainly manifested in three effects: acceptance, neutrality, and rejection. Subjective norm is therefore one of the main influencing factors when studying users' psychological changes. Perceived behavioral control is a direct reflection of users' mastery of the control abilities and functions of an AV according to their own comprehensive qualities (Buckley et al. 2018).
Hypothesis 6: Subjective norms have a positive influence on behavioral attitude.

Behavioral attitude, as a psychological disposition reflecting an affective state, is also a key variable in the theory of planned behavior; it is defined as the overall perception of using an AV, formed by evaluating the AV with a certain degree of approval or disapproval. Its effect on the acceptance of AVs is manifested in three main effects: acceptance, neutrality, and rejection.

Hypothesis 7: Behavioral attitude has a positive influence on behavioral intention.

Perceived behavioral control is a direct reflection of users' mastery of the control capabilities and functions of an AV and of their ability to control human–machine collaborative driving based on their overall qualities, experience, capabilities, and resources. Therefore, perceived behavioral control, as a criterion for evaluating and judging users' own ability to master and control autonomous driving technology, is another major factor influencing the acceptance of AVs.

Hypothesis 8: Perceived behavioral control has a positive influence on behavioral intention.

Questionnaire Design

A questionnaire was designed based on the definitions of the constructs in the improved model and the purposes of this study. The final questionnaire had three sections: Part I, Part II, and Part III (see Appendix). Part I, sociodemographic factors, consisted of five variables: gender, age, education, driving experience, and knowledge related to AVs.

Part II, the preexperience scale, corresponded to the seven constructs of the improved acceptance model for AVs and consisted of 27 questions. These questions are presented in the Appendix and were modified from previous studies (see sources in the Appendix).
The items for PEOU and PU were expanded to incorporate the functional benefits of AVs and individual perceptions of the driving experience. Moreover, the items for BA, PBC, SN, and BI were adapted to include information on human–machine collaborative driving capabilities and on policies and advertisements related to AVs. The items for the new construct HMRP proposed in this study were self-developed and tested in a pilot study. All items in this part of the survey were measured on a five-point Likert-type scale.

Part III, the postexperience feedback scale, was composed of 11 questions from the hybrid model of mediation and moderation (the questions on the HMRP construct were the same as those in the preexperience scale).

Survey Approach

The survey was implemented in two phases: a pilot study and a questionnaire survey. In the pilot study, the questionnaire was modified based on suggestions from six scholars in the field; then, an online pilot test was administered to a convenience sample of 30 adults. Based on feedback from the pilot study, the questionnaire was further adjusted and refined to clarify the indicator questions, resulting in the revised final questionnaire, which can be found in the Appendix.

For the formal survey, each participant completed Parts I and II of the questionnaire before a self-driving ride experience; Part III was completed after the experience. The survey was open to those who participated in a free ride in the L3 autonomous driving car at the Guangzhou Auto Show in China. The L3 autonomous driving car was manned by a safety officer in the driver's seat, who only observed road conditions and did not intervene in driving unless the autonomous driving system requested a takeover. In the preexperience survey, respondents were informed that the vehicle under investigation was a Level 3 self-driving car, and the concept was explained in a promotional video.
During the survey, the purpose of the survey was explained to the respondents, and it was emphasized that the survey would be conducted anonymously and that personal information would be kept confidential. A total of 315 questionnaires were distributed online and on paper, of which 15 (4.76%) were invalid.

Analysis of Sociodemographic Variables

Of the valid respondents (n=300), 197 (65.7%) were male; the mean age of the participants was 34.0 years (SD=6.5; range 18–59 years). A total of 171 (57.0%) held a valid driver's license, and mean actual driving experience was 5 years. In addition, 124 (41.3%) participants were able to correctly identify the most advanced level of autonomous driving available on the current market.

The acceptance level of users was 57%, lower than the average acceptance level of 63% in a previous survey. The acceptance of male users was 0.3% higher than that of female users, and the differences in acceptance between age groups ranged from 0.12% to 0.8%; these statistics indicated that the influence of gender and age on acceptance could be ignored. However, level of education had a greater impact on acceptance; the higher the level of education, the higher the acceptance, with a maximum difference of 1.75%.

Data Analysis and Model Validation

Before hypothesis testing for the improved model, the data were analyzed; that is, reliability and validity testing of the observed variables was conducted. First, the collected data were tested for reliability to check the consistency of the measured data. Next, exploratory factor analysis and confirmatory factor analysis were performed to assess the validity of the components, that is, the degree to which they truly reflected the variables being measured, to ensure the measurement accuracy of the questionnaire used in the study.
After the relevant fit indicators of the improved model met the requirements, the second stage of validation was carried out, in which path coefficient analysis was performed to test the research hypotheses in the improved model.

Reliability Test

Reliability refers to the extent to which results are consistent when the same method is used to measure the same subject repeatedly. To ensure the internal consistency of the scale, SPSS version 22 was used to test the reliability of the sample of 300 before conducting exploratory factor analysis. Cronbach's coefficient α was 0.769; because this value fell in the range of 0.7 to 0.8, the scale had acceptable reliability.

Exploratory Factor Analysis

Kaiser-Meyer-Olkin and Bartlett's Sphericity Tests

Exploratory factor analysis was implemented on the sample data using SPSS, and the results are shown in Table 2. The Kaiser-Meyer-Olkin (KMO) coefficient was 0.940, greater than 0.8, indicating that the scale data were well suited to factor analysis. The chi-square value of Bartlett's test of sphericity was 10,930.755 with a significance level of 0.000, passing the significance test at the 1% level (Zhou et al. 2020). These results indicated that the sample data were suitable for validity analysis.

Table 2. KMO and Bartlett tests

Kaiser-Meyer-Olkin metric: 0.940
Chi-square: 10,930.755
Degrees of freedom (df): 0.741
Significance: 0.000

Principal Component Analysis

Through principal component factor analysis of the sample data, factor analyses of the 27 measured variables were obtained for the seven latent variables (PU, PEOU, HMRP, PBC, SN, BA, BI) in the improved acceptance model; the results are shown in Table 3.
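The Cronbach's alpha used in the reliability test above follows a standard formula: α = k/(k−1) × (1 − Σ item variances / variance of total scores), for k items. As a minimal illustrative sketch in Python (the Likert responses below are hypothetical, not the study's data):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item)."""
    k = len(items)
    respondents = list(zip(*items))             # rows = respondents
    totals = [sum(row) for row in respondents]  # total score per respondent
    item_var = sum(pvariance(col) for col in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical five-point Likert responses from 6 respondents on 3 items
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 4, 2, 4, 3, 5],
]
print(round(cronbach_alpha(items), 3))
```

Higher inter-item consistency shrinks the ratio of summed item variances to total-score variance, pushing α toward 1; values between 0.7 and 0.8, as reported above, are conventionally acceptable.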
The factors were extracted by orthogonal rotation, retaining factors with eigenvalues greater than 1.0 and items with factor loadings greater than 0.4. The cumulative variance contribution of the six common factors was 68.159%, indicating that they adequately reflected the original data (Sharma and Mishra 2020).

Table 3. Results of principal component factor analysis

Factor  Items  Loading  Eigenvalue  Variance (%)  Cumulative (%)  CR     AVE    Cronbach's α
PU      4      0.645    2.898       25.431        25.431          0.860  0.635  0.877
PEOU    4      0.598    2.664       3.831         29.262          0.856  0.660  0.852
HMRP    6      0.671    2.557       10.607        39.869          0.845  0.583  0.829
SN      4      0.850    3.733       5.674         45.543          0.893  0.677  0.890
PBC     3      0.746    3.702       9.493         55.036          0.841  0.607  0.857
BA      3      0.687    3.948       13.123        68.159          0.847  0.571  0.837
BI      3      0.647    4.725       —             —               0.861  0.559  0.870

Cronbach's alpha was calculated to determine the internal consistency of the seven factors (PU, PEOU, HMRP, PBC, SN, BA, BI); values above 0.70 are generally acceptable in exploratory studies. In addition, composite reliability (CR) was calculated for the reliability test of the factors; a cutoff of 0.70 for CR generally indicates acceptable reliability (Hair et al. 1998). Last, average variance extracted (AVE) was calculated; AVE should exceed 0.50, implying that a construct captures more variance than is caused by measurement error (Fornell and Larcker 1981). The Cronbach's alpha, CR, and AVE values of all factors in Table 3 satisfied the standard reference values, indicating good internal consistency and reliability for all constructs.

Confirmatory Factor Analysis

In this study, confirmatory factor analysis of the model was performed with AMOS (analysis of moment structures) version 24 based on the questionnaire data. Items with unacceptable modification index (MI) values were removed. The obtained model fit metrics are shown in Table 4.
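The CR and AVE criteria cited above (CR > 0.70, AVE > 0.50) can be computed directly from standardized factor loadings. A minimal sketch, using hypothetical loadings for a four-item construct rather than values from this study:

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where the error variance of each indicator is 1 - loading^2."""
    s = sum(loadings)
    err = sum(1 - l * l for l in loadings)
    return s * s / (s * s + err)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l * l for l in loadings) / len(loadings)

# Hypothetical standardized loadings for a four-item construct
loadings = [0.78, 0.81, 0.74, 0.80]
print(round(composite_reliability(loadings), 3),
      round(average_variance_extracted(loadings), 3))
```

With these loadings, CR ≈ 0.864 and AVE ≈ 0.613, both clearing the 0.70 and 0.50 cutoffs discussed above.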
In the factor analysis of the initial model, the α coefficient was between 0.829 and 0.890, the average variance extracted was between 0.559 and 0.677, and the composite reliability was between 0.841 and 0.893. The initial model was modified according to parameters such as CMIN/DF, the comparative fit index (CFI), and the root-mean-square error of approximation (RMSEA).

Table 4. Overall fit coefficients of the improved model

Model          CMIN/DF  GFI    CFI    RMSEA  AIC        BCC        NFI    IFI
Initial model  2.329    0.863  0.930  0.058  1,058.361  1,070.929  0.885  0.931
Optimal model  1.394    0.925  0.981  0.031  716.039    734.806    0.938  0.982

The optimal model was obtained by constraining the substandard indices; the modified model fit parameters are also shown in Table 4. The CMIN/DF of the optimal model was 1.394, the goodness-of-fit index (GFI) was 0.925 (>0.9), the CFI was 0.981 (>0.9), the RMSEA was 0.031 (<0.08), and the normed fit index (NFI) was 0.938 (>0.8). The latent variables of the modified model had good discriminant validity, intrinsic validity, and fit (Serang et al. 2017). These factor analysis results showed that the questionnaire measurements fit the content to be examined well, indicating good structural validity (Wood et al. 2015). Therefore, the proposed model had a good degree of adaptation.

Path Analysis and Verification of the Structural Equation Model

After the scale data passed the reliability and validity tests, a path analysis of the structural equation model proposed in this study was conducted using AMOS version 25; the research hypothesis map is presented in Fig. 2. Because the null hypothesis for each path was a regression coefficient of zero (Peterson 2019) and the significance level for the approximately normally distributed test statistic was set at 0.05, the critical value for the absolute critical ratio (CR) was 1.96.
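This decision rule (reject the zero-coefficient null when |CR| > 1.96 at the 0.05 level) amounts to a two-tailed z-test. A small sketch with a hypothetical estimate and standard error, not values from Table 5:

```python
import math

def path_significance(estimate, std_error, alpha=0.05):
    """Two-tailed z-test for a path coefficient: CR = estimate / SE."""
    cr = estimate / std_error
    p = math.erfc(abs(cr) / math.sqrt(2))  # two-tailed p under N(0, 1)
    return cr, p, abs(cr) > 1.96 and p < alpha

# Hypothetical unstandardized estimate and standard error
cr, p, supported = path_significance(0.42, 0.08)
print(round(cr, 2), supported)
```

Here CR = 0.42/0.08 = 5.25 > 1.96, so the hypothetical path would be judged significant; an estimate of similar size with a much larger standard error would not be.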
For each hypothesized path, the absolute value of the critical ratio was greater than 1.96, and the p value was less than 0.05. After correction and constraint, the fitted path coefficients of the optimal model were obtained, as shown in Table 5.

Table 5. Path coefficients of the structural equation model for AV acceptance

Path              Standardized estimate  Result       Critical ratio
From PEOU to BA   0.411*                 H1 accepted  5.458
From PEOU to PU   0.180*                 H2 accepted  2.463
From PU to BA     0.721*                 H3 accepted  11.449
From HMRP to BA   −0.465*                H4 accepted  6.258
From HMRP to PBC  −0.691*                H5 accepted  10.754
From SN to BA     0.417*                 H6 accepted  6.463
From BA to BI     0.596*                 H7 accepted  9.725
From PBC to BI    0.420*                 H8 accepted  6.291

As shown in Table 5, PU and PEOU had significant positive effects on BA, although the effect of PEOU on PU was comparatively weak (0.180). In addition, HMRP had a significant negative effect on both BA and PBC. SN had a significant positive effect on BA, and PBC and BA were significantly positively related to BI.

The effects of PU and PEOU on BI were consistent with the findings of previous studies (Panagiotopoulos and Dimitrakopoulos 2018; Liu et al. 2019; Zhang et al. 2021). The relationships of PBC, BA, and SN with BI were also consistent with previous studies (Buckley et al. 2018; Acheampong and Cugurullo 2019; Gunawan et al. 2022). Moreover, it was concluded that HMRP indirectly influences the acceptance of AVs through PBC and BA and has a significant negative influence on acceptance. However, few studies have focused on the effect of perceived safety risks in terms of human manipulation on the acceptance of AVs.
Therefore, a cross-sectional comparison was not possible.

Mediating and Moderating Intervention Analysis

The foregoing results show that HMRP negatively affected AV acceptance by influencing users' BA and PBC related to AV adoption, implying that HMRP may cause a decrease in acceptance of AVs. To weaken the negative effect of HMRP on AV acceptance, a hybrid model of mediation and moderation was developed, as shown in Fig. 3. This model was constructed on the basis of the research hypotheses in the improved acceptance model that were validated above.

The main purpose of this model was to control the negative effect of HMRP on the acceptance of AVs through shunting. In the hybrid model, BI still refers to the intention to adopt AVs, representing acceptance of AVs. PBC was introduced into the path from HMRP to BI, mediating the relationship between HMRP and BI. In addition, user experience was introduced in the first stage of the mediated path from HMRP to BI via PBC, as an intervening variable moderating the impact of HMRP on PBC.

The corresponding questionnaire parts and survey methods for this model are described in the sections "Questionnaire Design" and "Survey Approach," respectively. The analysis was based on the 300 matched pre- and postexperience questionnaires. After the validity and reliability of the data were verified, the mediating effect of PBC on the relationship between HMRP and AV acceptance and the moderating effect of user experience were analyzed as follows.

Common Method Bias Test

A common method bias test was performed on the hybrid model of mediation and moderation using Harman's single-factor test (Grafke and Vanden-Eijnden 2019). There were 10 factors with eigenvalues greater than 1, and the variance explained by the first factor was 17.62%, much less than the critical value of 40%.
The results indicated that there was no significant common method bias in this study.

Descriptive Statistics and Correlation Matrix of the Main Variables

The results of the correlation analysis of the main variables in the hybrid model are shown in Table 6. The correlation coefficient between HMRP and BI for adopting AVs was −0.33, indicating a significant negative correlation between HMRP and BI. In addition, there were significant positive correlations between user experience and BI for adopting AVs and between PBC and BI for adopting AVs. HMRP was significantly and negatively correlated with PBC.

Table 6. Means, standard deviations, and correlation matrix of the main variables

Variable  M±SD       HMRP    PBC    UE     BI
HMRP      1.75±0.50  1
PBC       4.46±0.62  −0.41*  1
UE        4.82±0.92  −0.17*  0.38*  1
BI        1.88±0.80  −0.33*  0.37*  0.34*  1

Moderated Mediation Model Test

First, a mediating effect test was conducted using Model 4 in the SPSS macro program PROCESS version 3.3 (Kuang et al. 2021). The results showed that after controlling for the effects of educational level and driving age, the total effect was 0.31, 95% confidence interval (CI) (0.24, 0.38), and the direct effect was 0.19, 95% CI (0.12, 0.26) (see Table 7). In addition, the mediating effect of PBC was 0.12, 95% CI (0.08, 0.15), accounting for 38.71% of the total effect. This implies that PBC partially mediated the relationship between HMRP and BI for adopting AVs.

Table 7. Mediated role with moderation
Mediated role with moderation

Outcome variable: PBC (R² = 0.12, F = 50.18*)
Predictive variable         β        95% CI            t
Educational level           0.06     (−0.01, 0.12)     1.58
Driving age                 0.01     (−0.06, 0.06)     0.16
HMRP                        −0.35    (−0.43, −0.28)    −10.10
User experience             0.32     (0.25, 0.39)      9.06
HMRP × user experience      0.14     (0.07, 0.20)      4.41

Outcome variable: BI (R² = 0.31, F = 66.25*)
Predictive variable         β        95% CI            t
Educational level           0.34     (0.27, 0.41)      10.10
Driving age                 0.07     (−0.05, 0.20)     2.04
PBC                         0.19     (0.12, 0.27)      5.12
HMRP                        −0.29    (−0.36, −0.21)    −7.73

Furthermore, a moderating effect test was conducted using Model 7 in the SPSS macro program Process. The results indicated that the effects of HMRP and user experience on PBC were significant. In addition, the mediated effect of PBC was 0.14, 95% CI (0.10, 0.19), at high user experience levels. In contrast, the mediated effect of PBC decreased to 0.06, 95% CI (0.03, 0.09), when user experience was low. This suggests that user experience moderated the mediating role of PBC in the relationship between HMRP and acceptance of AVs. Moreover, the results imply that the negative effect of HMRP on acceptance of AVs was greater when user experience was perceived to be low.

To further explain the moderating effect of user experience, respondents were divided into high and low user experience groups according to the mean plus or minus one standard deviation, and a simple slope test was performed (Hinz et al. 2020). Fig. 4 presents the moderating effect of user experience level on the negative effect produced by HMRP; the x-axis represents HMRP and the y-axis represents PBC. The results show that when the level of user experience was low, there was a significant negative effect of HMRP on PBC [B_simple = −0.49, p < 0.001, 95% CI (−0.58, −0.40)].
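The conditional (simple) slopes used in this test follow directly from the moderated regression of PBC on HMRP, user experience, and their product: the slope of HMRP at a given experience level is b1 + b3 × UE. The following Python sketch illustrates the computation on synthetic data, with coefficients set near those in Table 7; it is not the study's dataset or the PROCESS macro itself.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300  # assumed sample size, matching the survey count above

# Hypothetical stand-ins for the survey variables (not the study's data)
hmrp = rng.normal(0.0, 0.5, n)   # mean-centered HMRP
ue = rng.normal(0.0, 0.9, n)     # mean-centered user experience
# Simulate PBC with a known moderation structure (b1, b2, b3 near Table 7)
pbc = 4.5 - 0.35 * hmrp + 0.32 * ue + 0.14 * hmrp * ue + rng.normal(0.0, 0.3, n)

# OLS fit of PBC ~ 1 + HMRP + UE + HMRP:UE
X = np.column_stack([np.ones(n), hmrp, ue, hmrp * ue])
b0, b1, b2, b3 = np.linalg.lstsq(X, pbc, rcond=None)[0]

# Simple slopes of HMRP on PBC at low/high user experience (mean -/+ 1 SD)
sd = ue.std(ddof=1)
slope_low = b1 + b3 * (-sd)   # strongly negative at low experience
slope_high = b1 + b3 * (+sd)  # attenuated at high experience
print(f"slope at low UE:  {slope_low:.2f}")
print(f"slope at high UE: {slope_high:.2f}")
```

Because the interaction coefficient is positive, the slope at +1 SD is less negative than at −1 SD, reproducing the attenuation pattern shown in Fig. 4.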
In contrast, when the level of user experience was high, the negative effect of HMRP on PBC was attenuated [B_simple = −0.21, p < 0.001, 95% CI (−0.31, −0.12)].

Discussion and Conclusion

This study aimed to explain the decline in user acceptance of AVs following a spate of AV accidents and to identify ways to improve that acceptance. In the context of human–machine collaborative driving, this study proposed a new safety-related factor, HMRP. On this basis, the study explored the effect of the human manipulation risk associated with higher-level automated driving on the acceptance of AVs and the mechanisms through which this risk operates. The HMRP variable was introduced to construct an improved acceptance model for AVs. The test results of the improved model showed that HMRP negatively and significantly influenced PBC and BA, and that PBC and BA were significantly and positively related to AV acceptance. Therefore, it can be concluded that HMRP is the main reason for the decrease in acceptance of AVs. It was also found that user confusion about the conceptual scope of autonomous driving technology and misuse of functions were the main reasons users doubted their control capabilities.

In addition, HMRP indirectly affected the acceptance of AVs through PBC and BA, and there was a significant negative correlation between HMRP and acceptance of AVs. This means that in the context of human–machine collaborative driving, when a user's perceived manipulation risk of the autonomous driving function is higher, the user's perceived ability to manipulate and control AVs may decrease. This weakens the user's confidence and sense of safety in the human–machine collaborative control of AVs, which in turn indirectly lowers the user's acceptance of AVs.
However, HMRP may also make user attitudes toward AVs less positive, further reducing AV acceptance.

To improve the acceptance of AVs, a hybrid model of mediation and moderation incorporating user experience was proposed on the basis of the foregoing conclusions. Mediation and moderation analysis of the hybrid model was conducted using simultaneous survey data obtained from respondents after their test ride experience. The results showed that PBC partially mediated the negative relationship between HMRP and AV acceptance, and that user experience had a significant intervention effect on the relationship between HMRP and PBC. It was also found that when users became familiar with the application scenarios and conditions of use of the autonomous driving function, their ability to maneuver and control AVs improved. In addition, the improvement in users' perceived ability to maneuver and control AVs may have weakened the negative effect of HMRP on the acceptance of AVs. Therefore, it can be concluded that the hybrid model of mediation and moderation improved the acceptance of AVs. The findings of this study can improve user perceptions of AV safety, which in turn can promote driving safety and technology development.

With the continuous increase in the number of civilian vehicles, the marketing of high-level AVs, and the frequency of extreme weather, the complexity of mixed traffic is likely to increase in the future. This poses a great challenge to the safety of human–machine collaborative driving of AVs. Future research needs to expand the scope of the questionnaire, collect more information about typical application scenarios for autonomous driving, and further explore theories and methods to improve the acceptance of AVs.

References

Acheampong, R. A., and F. Cugurullo. 2019.
“Capturing the behavioural determinants behind the adoption of autonomous vehicles: Conceptual frameworks and measurement models to predict public transport, sharing and ownership trends of self-driving cars.” Transp. Res. Part F Traffic Psychol. Behav. 62 (Apr): 349–375.
Bansal, P., K. M. Kockelman, and A. Singh. 2016. “Assessing public opinions of and interest in new vehicle technologies: An Austin perspective.” Transp. Res. Part C Emerging Technol. 67 (Jun): 1–14.
Buckley, L., S. A. Kaye, and A. K. Pradhan. 2018. “Psychosocial factors associated with intended use of automated vehicles: A simulated driving study.” Accid. Anal. Prev. 115 (Jun): 202.
Chikaraishi, M., D. Khan, B. Yasuda, and A. Fujiwara. 2020. “Risk perception and social acceptability of autonomous vehicles: A case study in Hiroshima, Japan.” Transp. Policy 98 (Nov): 105–115.
Davis, F. D., R. P. Bagozzi, and P. R. Warshaw. 1989. “User acceptance of computer technology: A comparison of two theoretical models.” Manage. Sci. 35 (8): 982–1003.
Detjen, H., S. Faltaous, B. Pfleging, S. Geisler, and S. Schneegass. 2021. “How to increase automated vehicles’ acceptance through in-vehicle interaction design: A review.” Int. J. Hum.-Comput. Interact. 37 (4): 308–330.
Fornell, C., and D. F. Larcker. 1981. “Evaluating structural equation models with unobservable variables and measurement error.” J. Marketing Res. 18 (1): 39–50.
Gkartzonikas, C., and K. Gkritza. 2019. “What have we learned? A review of stated preference and choice studies on autonomous vehicles.” Transp. Res. Part C Emerging Technol. 98 (Jan): 323–337.
Gold, C., M. Körber, C. Hohenberger, D. Lechner, and K. Bengler. 2015. “Trust in automation–before and after the experience of take-over scenarios in a highly automated vehicle.” Procedia Manuf. 3 (7): 3025–3032.
Grafke, T., and E. Vanden-Eijnden. 2019. “Numerical computation of rare events via large deviation theory.” Chaos: Interdiscipl. J. Nonlinear Sci. 29 (6): 063118.
Gunawan, I., A. A. N. P. Redi, A. A. Santosa, M. F. N. Maghfiroh, A. H. Pandyaswargo, and A. C. Kurniawan. 2022. “Determinants of customer intentions to use electric vehicle in Indonesia: An integrated model analysis.” Sustainability 14 (4): 1972.
Ha, T., S. Kim, D. Seo, and S. Lee. 2020. “Effects of explanation types and perceived risk on trust in autonomous vehicles.” Transp. Res. Part F Traffic Psychol. Behav. 73 (Aug): 271–280.
Hair, J. F., R. E. Anderson, R. L. Tatham, and W. C. Black. 1998. Multivariate data analysis. 5th ed. Hoboken, NJ: Prentice-Hall.
Hegner, S. M., A. D. Beldad, and G. J. Brunswick. 2019. “In automatic we trust: Investigating the impact of trust, control, personality characteristics, and extrinsic and intrinsic motivations on the acceptance of autonomous vehicles.” Int. J. Hum.-Comput. Interact. 35 (4): 1–12.
Jung, S. J., and H. S. Kim. 2021. “A study on the intention of mobile delivery apps: Applying the technology acceptance model (TAM).” Culinary Sci. Hospitality Res. 26 (12): 24–32.
Kang, M., J. Song, and K. Hwang. 2020. “For preventative automated driving system (PADS): Traffic accident context analysis based on deep neural networks.” Electronics 9 (11): 1829.
Kuang, Y. P., J. L. Yang, and M. C. Abate. 2021. “Farmland transfer and agricultural economic growth nexus in China: Agricultural TFP intermediary effect perspective.” China Agric. Econ. Rev. 14 (1): 184–201.
Li, L., J. Gan, Z. Yi, X. Qu, and B. Ran. 2020. “Risk perception and the warning strategy based on safety potential field theory.” Accid. Anal. Prev. 148 (Dec): 105805.
Liu, P., R. Yang, and Z. Xu. 2019. “Public acceptance of fully automated driving: Effects of social trust and risk/benefit perceptions.” Risk Anal. 39 (2): 326–341.
Man, S. S., W. Xiong, F. Chang, and A. H. S. Chan. 2020. “Critical factors influencing acceptance of automated vehicles by Hong Kong drivers.” IEEE Access 8 (Jun): 109845–109856.
Panagiotopoulos, I., and G. Dimitrakopoulos. 2018. “An empirical investigation on consumers’ intentions towards autonomous driving.” Transp. Res. Part C Emerging Technol. 95 (Oct): 773–784.
Pascale, M. T., D. Rodwell, P. Coughlan, S. A. Kaye, S. Demmel, S. G. Dehkordi, and S. Glaser. 2021. “Passengers’ acceptance and perceptions of risk while riding in an automated vehicle on open, public roads.” Transp. Res. Part F Traffic Psychol. Behav. 83 (Nov): 274–290.
Perello-March, J. R., C. G. Burns, S. A. Birrell, R. Woodman, and M. T. Elliott. 2022. “Physiological measures of risk perception in highly automated driving.” IEEE Trans. Intell. Transp. Syst. 23 (5): 4811–4822.
Rahman, M. M., M. F. Lesch, W. J. Horrey, and L. Strawderman. 2017. “Assessing the utility of TAM, TPB, and UTAUT for advanced driver assistance systems.” Accid. Anal. Prev. 108 (Nov): 361–373.
Shalender, K., and N. Sharma. 2021. “Using extended theory of planned behaviour (TPB) to predict adoption intention of electric vehicles in India.” Environ. Dev. Sustainability 23 (4): 665–681.
Sharma, I., and S. Mishra. 2020. “Modeling consumers’ likelihood to adopt autonomous vehicles based on their peer network.” Transp. Res. Part D Transp. Environ. 87 (Oct): 102509.
Shin, K. J., and S. Managi. 2017. Consumer demand for fully automated driving technology: Evidence from Japan. Tokyo: RIETI.
Sinha, A., S. Chand, V. Vu, H. Chen, and V. Dixit. 2021. “Crash and disengagement data of autonomous vehicles on public roads in California.” Sci. Data 8 (1): 1–10.
Sun, C., S. Zheng, Y. Ma, D. Chu, J. Yang, Y. Zhou, and T. Xu. 2021. “An active safety control method of collision avoidance for intelligent connected vehicle based on driving risk perception.” J. Intell. Manuf. 32 (5): 1249–1269.
Talebian, A., and S. Mishra. 2018. “Predicting the adoption of connected autonomous vehicles: A new approach based on the theory of diffusion of innovations.” Transp. Res. Part C Emerging Technol. 95 (Oct): 363–380.
TAMPS (Traffic Administration of the Ministry of Public Security). 2021. Annual report on road traffic accident statistics. Wuxi, China: Institute of Traffic Management Science, Ministry of Public Security.
Wood, N. D., D. C. Akloubou Gnonhosou, and J. W. Bowling. 2015. “Combining parallel and exploratory factor analysis in identifying relationship scales in secondary data.” Marriage Family Rev. 51 (5): 385–395.
Zhang, T., D. Tao, X. Qu, X. Zhang, R. Lin, and W. Zhang. 2019. “The roles of initial trust and perceived risk in public’s acceptance of automated vehicles.” Transp. Res. Part C Emerging Technol. 98 (Jan): 207–220.
Zhang, T., W. Zeng, Y. Zhang, D. Tao, G. Li, and X. Qu. 2021. “What drives people to use automated vehicles? A meta-analytic review.” Accid. Anal. Prev. 159 (Sep): 106270.
Zhou, F., Z. Zheng, J. Whitehead, S. Washington, R. K. Perrons, and L. Page. 2020. “Preference heterogeneity in mode choice for car-sharing and shared automated vehicles.” Transp. Res. Part A Policy Pract. 132 (Feb): 633–650.
Zoellick, J. C., A. Kuhlmey, L. Schenk, D. Schindel, and S. Blüher. 2019. “Amused, accepted, and used? Attitudes and emotions towards automated vehicles, their relationships, and predictive value for usage intention.” Transp. Res. Part F Traffic Psychol. Behav. 65 (Aug): 68–78.