Real-time and ultrahigh accuracy image synthesis algorithm for full field of view imaging system


System design and simulation

To cover the whole FOV, we first designed and modeled a 19-camera imaging system in which each camera is placed at a given position, and simulated the FOVs of the 19 cameras on the SolidWorks software platform. Figure 4 shows the FOV model of the 19 cameras. Figure 4a shows the FOV of a single camera; the effective FOV of each camera is 100° (θ = 100°), and the 19 small FOVs together realize detection without dead zones. Figure 4b,c show the overlap models of the 19 FOVs from different viewing angles. In the simulation, the horizontal FOV of the system reaches 360° and the vertical FOV reaches more than 240°.

Figure 4

FOV of 19 cameras. (a) FOV of a single camera. (b) Overlap model of view 1. (c) Overlap model of view 2.

For the global camera layout, we establish a three-dimensional coordinate system to describe the distribution of the 19 imaging channels. Based on the SolidWorks software platform, the rotation angles of the 19 imaging channels are shown in Table 1. Here i denotes the channel number, θ is the angle of the imaging channel with respect to the X-axis, and β is the angle with respect to the Z-axis. Imaging channel A is the center of the 19 channels and is taken as the reference; rotation angles about it are defined as negative for counterclockwise rotation and positive for clockwise rotation.

Table 1 The angle of 19 imaging channels in space (°).
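The angles in Table 1 can be read as two rotations of each channel away from the reference channel A. As a rough illustration only (the exact axis and sign conventions are those of Table 1 and the SolidWorks model; the function and the reference direction below are assumptions), the following C++ sketch converts a channel's (θ, β) pair into a unit pointing direction:

```cpp
// Illustrative sketch only: convert a channel's (theta, beta) rotation angles
// into a unit pointing direction. Channel A (theta = beta = 0) is assumed to
// point along +Y; the actual axis/sign convention is that of Table 1.
#include <cmath>
#include <cstdio>

struct Direction { double x, y, z; };

// theta_deg: rotation about the X-axis (degrees); beta_deg: rotation about the Z-axis.
Direction channelDirection(double theta_deg, double beta_deg) {
    const double d2r = 3.14159265358979323846 / 180.0;
    const double theta = theta_deg * d2r, beta = beta_deg * d2r;
    // Rotate the reference direction (0, 1, 0) about X, then about Z.
    const double y1 = std::cos(theta), z1 = std::sin(theta);   // after rotation about X
    return { -y1 * std::sin(beta), y1 * std::cos(beta), z1 };  // after rotation about Z
}

int main() {
    Direction d = channelDirection(30.0, -45.0);   // hypothetical channel angles
    std::printf("direction = (%.3f, %.3f, %.3f)\n", d.x, d.y, d.z);
    return 0;
}
```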

RUF algorithm for full field of view imaging system

To prove the efficiency of the proposed algorithm, we fabricated a multi-camera imaging system consisting of 19 cameras, illustrated in Fig. 5a. The spherical frame is made of aluminum, with 19 holes drilled in it to install the 19 cameras. The radius of the frame is ~130 mm and its thickness is ~6 mm. A single camera is shown in Fig. 5b; it consists of a lens and an image sensor. Each 13-mm-diameter camera has a FOV of 100° and a focal length of ~3.5 mm. The 19 cameras are used for information capture, and the 19 images are captured at 30 frames per second. One computer is used to connect the cameras and store the data, serving as the hardware carrier for real-time imaging.
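For context, the following minimal OpenCV sketch shows how the 19 channels could be opened and read at the stated resolution and frame rate. The device indexing, property support and error handling are assumptions for illustration, not the actual acquisition code of the fabricated system:

```cpp
// Minimal sketch: open 19 camera channels and grab one frame from each.
// Assumes the cameras enumerate as consecutive OpenCV device indices and
// accept 2592x1944 @ 30 fps; error handling is kept minimal.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    const int kNumCameras = 19;
    std::vector<cv::VideoCapture> caps(kNumCameras);
    for (int i = 0; i < kNumCameras; ++i) {
        caps[i].open(i);                                  // device index is an assumption
        caps[i].set(cv::CAP_PROP_FRAME_WIDTH, 2592);
        caps[i].set(cv::CAP_PROP_FRAME_HEIGHT, 1944);
        caps[i].set(cv::CAP_PROP_FPS, 30);
    }
    std::vector<cv::Mat> frames(kNumCameras);
    for (int i = 0; i < kNumCameras; ++i) {
        if (!caps[i].read(frames[i]))                     // grab one frame per channel
            frames[i] = cv::Mat();                        // leave empty on failure
    }
    // frames[] now holds the 19 raw images passed to the synthesis algorithm.
    return 0;
}
```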

Figure 5

The preparation result of the fabricated system (130 mm × 130 mm × 6 mm). (a) The structure of the fabricated system. (b) The composition of single imaging channel.

In the experiment, we used the fabricated system to image a building. First, the fabricated system captured the whole building. Then, the 19 images were synthesized using the proposed algorithm. The algorithm runs on a PC (Intel Core i9-9880H CPU @ 2.3 GHz/4.8 GHz + RTX 2080) with the Windows 10 operating system and is implemented on the vs2019 + opencv4.2 platform. Each image has a resolution of 2,592 × 1,944. The 19 images are shown in Fig. 6, and the whole-FOV image is shown in Fig. 7. From Fig. 7 we see that the synthesized image has a large FOV, high resolution and low distortion. In the experiment, the whole FOV of the image is 360° × 240°. Each part of the image is very clear; the resolution reaches up to 95 megapixels (19 × 2,592 × 1,944 ≈ 95.7 million pixels). Meeting the engineering requirement, real-time imaging is also realized.

Figure 6

The 19 images (captured on 13 June 2019 at 15:26).

Figure 7

The whole-FOV image captured by fabricated system.

We also compare the fabricated system with a conventional 5-megapixel fisheye camera, as shown in Fig. 8. Both systems can obtain a large-FOV image. However, comparing the details, the performances of the two systems differ.

Figure 8

The comparison of the fisheye camera and our fabricated imaging system. (a) 360° fisheye image captured by the fisheye camera: regions 1–3 are the enlarged details of the labelled regions in (a). (b) The synthesized image with large FOV, high resolution and low distortion: images 1–3 are the enlarged details of the labelled regions in (b).

Comparing the details of the two images in region 1, the fabricated system resolves the part of the scene from the bridge bottom to the grille ceiling. It provides accurate information: for example, exactly how many linear grilles (50) lie between the two elongated white boards on the building ceiling, and exactly how many grilles (3) are occupied by one white board; the details of the window on the wall are also clearly visible. In contrast, these details cannot be seen clearly with the fisheye camera.

Comparing the details of the two images in region 2, the conventional fisheye camera cannot image the whole scene with uniform exposure. For example, some parts of the scene may be overexposed and others underexposed owing to the complex lighting environment in the real world. The fabricated system avoids this problem. It produces an example of a high-dynamic-range (HDR) image in which the illumination varies from the bright wall to the dark ceiling and back to the colorful scene, a situation in which uniform exposure is difficult to obtain with the fisheye camera (see region 2 in the fisheye image). In the yellow-labeled region of the fabricated system's image, the characters in the exhibition hall can be seen clearly. The comparison illustrates an HDR synthesized image in which partial darkness does not mask the color information of the object space. This difference mainly benefits from the independent exposure of each camera, which more accurately matches human vision.

Comparing the details of the two images in region 3, the conventional imaging system has serious distortion due to its large FOV. In contrast, the proposed algorithm synthesizes multiple low-dynamic-range (LDR) images into an HDR image, and each camera has a relatively small FOV, which largely reduces the distortion. For instance, the white border of the cuboid building in the synthesized image is clearly straight, while that in the fisheye image is curved. The combination of system and algorithm provides a competitive advantage over the comparison system. Therefore, our fabricated system can obtain a large-FOV image with low distortion and high resolution.

Accuracy of RUF

To prove the accuracy of the proposed algorithm, we performed several experiments and illustrate the whole imaging process of the RUF image synthesis algorithm.

An example of points-of-interest detection, filtration and matching is illustrated in Fig. 9a–g. Figure 9a,b are the input images, and Fig. 9c,d show the feature points marked with circles of various colors. Our proposed variable iteration method is used to remove bad points of interest, and the matching result is illustrated in Fig. 9e, where the blue points of interest are discarded and the red ones are retained as good matches.
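The detection-filtration-matching stage can be illustrated with a generic OpenCV sketch. Note that the ORB features and Lowe-style ratio test below are stand-ins for illustration only; they are not the paper's RUF detector or its variable iteration filtering:

```cpp
// Generic sketch of a detect-match-filter stage using ORB and a ratio test.
// Stand-in for illustration; thresholds and detector are not those of RUF.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::DMatch> detectAndMatch(const cv::Mat& imgA, const cv::Mat& imgB) {
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kpA, kpB;
    cv::Mat descA, descB;
    orb->detectAndCompute(imgA, cv::noArray(), kpA, descA);   // points of interest
    orb->detectAndCompute(imgB, cv::noArray(), kpB, descB);

    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(descA, descB, knn, 2);                   // 2 nearest neighbours

    std::vector<cv::DMatch> good;
    for (const auto& m : knn) {                               // ratio test filters bad points
        if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance)
            good.push_back(m[0]);
    }
    return good;                                              // the "good matches"
}
```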

Figure 9

A near-field scene for RUF imaging, (a,b) original images, (c,d) points of interest are extracted, (e) variable iteration method for the rough matching, (f) optimal matching for the precise matching, (g) the result for image reconstruction.

Owing to the introduction of variable thresholds, the matching precision is improved considerably until only the good matches are retained (see the previous "RUF algorithm for full field of view imaging system" section for details). Considering the computation cost, optimal matching based on equal slopes is adopted (see the same section for details). The matched result is illustrated in Fig. 9f: the red matched pairs with equal slopes are reserved and used for finding the optimal solution. The result of image reconstruction is illustrated in Fig. 9g, which proves that the RUF method can adapt to complex environments including scaling, uneven illumination and rotation. RUF enables real-time, 100%-accuracy, high-resolution imaging to be realized.
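The equal-slope idea can be sketched as follows: when the two input images are placed side by side, correct matches connect along nearly parallel lines, so pairs whose slope deviates from the dominant slope are discarded. The side-by-side layout, the median-based estimate of the dominant slope and the tolerance value below are assumptions for illustration, not the paper's exact criterion:

```cpp
// Sketch of equal-slope filtering: keep only match lines whose slope is close
// to the dominant (median) slope when imgB is placed to the right of imgA.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

std::vector<cv::DMatch> keepEqualSlopeMatches(const std::vector<cv::KeyPoint>& kpA,
                                              const std::vector<cv::KeyPoint>& kpB,
                                              const std::vector<cv::DMatch>& matches,
                                              int widthA, double tol = 0.02) {
    if (matches.empty()) return {};
    std::vector<double> slopes;
    for (const auto& m : matches) {
        cv::Point2f pA = kpA[m.queryIdx].pt;
        cv::Point2f pB = kpB[m.trainIdx].pt + cv::Point2f((float)widthA, 0.f);
        slopes.push_back((pB.y - pA.y) / (pB.x - pA.x + 1e-6));   // slope of the match line
    }
    std::vector<double> sorted = slopes;                          // dominant slope ~ median
    std::nth_element(sorted.begin(), sorted.begin() + sorted.size() / 2, sorted.end());
    double medianSlope = sorted[sorted.size() / 2];

    std::vector<cv::DMatch> kept;
    for (size_t i = 0; i < matches.size(); ++i)
        if (std::abs(slopes[i] - medianSlope) < tol)              // near-equal slopes survive
            kept.push_back(matches[i]);
    return kept;
}
```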

Example RUF data are given in Fig. 10. In this step, GPU acceleration and multithreading make real-time imaging possible, and the matched pairs based on equal slopes can always be found as the best solution; Fig. 10 illustrates that the RUF registration achieves 100% accuracy in a near-field scene. We introduce SSIM20 to quantitatively evaluate the image quality. We select the 2 blue regions of the patchwork areas in the composite picture (see Fig. 9g for the blue regions). As listed in Table 2, the values of SSIM1 (comparing the left blue region with the identical region in original image a) and SSIM2 (comparing the right blue region with the identical region in original image b) are computed. The image synthesis algorithm is able to adapt to complex environments; the previous sections have proved the adaptability of the RUF algorithm to angle, illumination and scale, but different rotation angles and lighting conditions also cause SSIM to drop, and this decline is predictable and acceptable. The red and purple regions are selected for further verification: Table 2 shows that the SSIM value close to the split seam stays above 0.95 and that of the non-stitched region stays above 0.98. The results demonstrate that the image quality is well preserved (SSIMmax = 1), because both patches originate from the original image (theoretically, SSIM = 1). Owing to the distortion, rotation, scaling and other effects of the algorithm on the synthesized image, the measured SSIM value must be less than 1, but the stitching accuracy is infinitely close to 100%.
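For reference, the SSIM between a patch of the synthesized image and the corresponding patch of the original image can be computed as in the sketch below, which follows the standard SSIM definition for 8-bit grayscale patches of equal size; the 11 × 11 window and the constants are the commonly used defaults, not values specified by the paper:

```cpp
// Sketch of a standard SSIM computation for two 8-bit grayscale patches of
// equal size (11x11 Gaussian window, C1/C2 for L = 255). Follows the common
// SSIM definition; not taken from the paper's implementation.
#include <opencv2/opencv.hpp>

double computeSSIM(const cv::Mat& img1, const cv::Mat& img2) {
    const double C1 = 6.5025, C2 = 58.5225;              // (0.01*255)^2, (0.03*255)^2
    cv::Mat I1, I2;
    img1.convertTo(I1, CV_32F);
    img2.convertTo(I2, CV_32F);

    cv::Mat mu1, mu2;
    cv::GaussianBlur(I1, mu1, cv::Size(11, 11), 1.5);    // local means
    cv::GaussianBlur(I2, mu2, cv::Size(11, 11), 1.5);
    cv::Mat mu1_2 = mu1.mul(mu1), mu2_2 = mu2.mul(mu2), mu1_mu2 = mu1.mul(mu2);

    cv::Mat sigma1_2, sigma2_2, sigma12;                 // local variances / covariance
    cv::GaussianBlur(I1.mul(I1), sigma1_2, cv::Size(11, 11), 1.5);  sigma1_2 -= mu1_2;
    cv::GaussianBlur(I2.mul(I2), sigma2_2, cv::Size(11, 11), 1.5);  sigma2_2 -= mu2_2;
    cv::GaussianBlur(I1.mul(I2), sigma12,  cv::Size(11, 11), 1.5);  sigma12  -= mu1_mu2;

    cv::Mat num = (2 * mu1_mu2 + C1).mul(2 * sigma12 + C2);
    cv::Mat den = (mu1_2 + mu2_2 + C1).mul(sigma1_2 + sigma2_2 + C2);
    cv::Mat ssimMap;
    cv::divide(num, den, ssimMap);                       // per-pixel SSIM map
    return cv::mean(ssimMap)[0];                         // mean SSIM over the patch
}
```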

Figure 10

The line chart of RUF data.

Table 2 The image quality evaluation for near-field scene.

To further prove the ultrahigh accuracy of the algorithm, a far-field scene is illustrated in Fig. 11. Figure 11 shows that all points of interest are correct (100% registration rate). SSIM is again introduced to evaluate the image quality, and Fig. 11g demonstrates that the SSIM indexes of the 2 yellow regions near the seam are both 1.0. Therefore, the proposed algorithm can achieve ultrahigh accuracy (100%) while imaging in real time, which is of great significance for the application of the algorithm in engineering.

Figure 11

A far-field scene for RUF imaging, (a,b) original images, (c,d) points of interest are extracted, (e) variable iteration method for the rough matching, (f) optimal matching for the precise matching, (g) the result for image reconstruction.

The processing time of RUF for the full-FOV system is described in Table 3, which lists the time required for each step of the full-FOV algorithm (for the full-FOV system: 19 images). Table 4 gives the detailed processing time for various numbers of images (2, 7, and 19), where the near-field and far-field scenes correspond to 227 ms and 158 ms respectively. Compared with serial processing, parallel processing improves the speed by an order of magnitude.
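The parallel part of the pipeline can be sketched as per-image worker threads, as below. This shows CPU-side multithreading only (the paper additionally uses GPU acceleration), and processOneImage() is a hypothetical placeholder for the per-image stage; the actual pipeline and timings are those reported in Tables 3 and 4:

```cpp
// Sketch: process the 19 input images concurrently, one worker thread each,
// then join before the synthesis stage. processOneImage() is a placeholder.
#include <opencv2/opencv.hpp>
#include <functional>
#include <thread>
#include <vector>

void processOneImage(const cv::Mat& in, cv::Mat& out) {
    cv::cvtColor(in, out, cv::COLOR_BGR2GRAY);   // placeholder per-image work
}

void processAllParallel(const std::vector<cv::Mat>& inputs, std::vector<cv::Mat>& outputs) {
    outputs.resize(inputs.size());
    std::vector<std::thread> workers;
    for (size_t i = 0; i < inputs.size(); ++i)
        workers.emplace_back(processOneImage, std::cref(inputs[i]), std::ref(outputs[i]));
    for (auto& t : workers) t.join();            // wait for all channels to finish
}
```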

Table 3 The time of algorithm step for full-field-of-view imaging system.
Table 4 The detailed processing time for various number of images.


