Abstract

Temporary traffic flow observation plays a pivotal role in transportation planning, consultation, and decision-making, supplementing fixed long-term traffic observation data. Although automated technologies are widely used for traffic observation, they are limited by the temporary nature and uncertain locations of observation points and are therefore difficult to apply to temporary observations. In this study, the deep-learning-based YOLO_V3 algorithm is used for vehicle detection in roadside videos captured at temporary observation points, and a secondary detection framework based on vehicle detection and traffic-counting regions is proposed. A traffic-counting scheme combining Kalman filtering, Hungarian assignment, and perspective projection transformation is then established. In addition, multiple sets of real-world video data are collected, and the effectiveness of the method under different conditions is analyzed in terms of three factors: the camera's intersection angle with the road, its mounting height, and the road traffic density. Results show that at a camera height of 3 m and a roadside angle of 30°, the counting accuracy is approximately 95%; however, accuracy drops to approximately 90% when vehicles are occluded by large buses and trucks during detection. The execution efficiency of the algorithm is tested on 1080p video streams using a Windows 10 x64 operating system, a 2080Ti graphics card, 64 GB of RAM, and an i7-7820X CPU. Results show that camera installation angle and height have no appreciable effect on operating efficiency. Under low-density traffic, the frame rate is approximately 44 frames per second (FPS); under high-density traffic, it drops to approximately 33 FPS, indicating that the method retains high execution efficiency and can be used for real-time video traffic counting.
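The Hungarian-assignment step of the counting scheme described above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes an IoU-based cost matrix between predicted track boxes (e.g., from a Kalman filter) and new YOLO detections, and all function names here are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian algorithm


def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) form."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)


def match_tracks(tracks, detections, iou_threshold=0.3):
    """Assign detections to predicted track boxes via the Hungarian method.

    Returns a list of (track_index, detection_index) pairs whose IoU
    exceeds the threshold; unmatched tracks/detections are left out
    (in a full tracker they would be coasted or spawned as new tracks).
    """
    if not tracks or not detections:
        return []
    # Cost = 1 - IoU, so the optimal assignment maximizes total overlap.
    cost = np.zeros((len(tracks), len(detections)))
    for i, t in enumerate(tracks):
        for j, d in enumerate(detections):
            cost[i, j] = 1.0 - iou(t, d)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols)
            if cost[r, c] <= 1.0 - iou_threshold]
```

In a full pipeline, each matched pair would update the corresponding Kalman filter state, and a vehicle would be counted once its track crosses the perspective-corrected counting region.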