Fast Panorama Stitching
Introduction
Taking panoramic pictures has become a common scenario and is included in most smartphones’ and
tablets’ native camera applications. Panorama stitching applications work by taking multiple images,
algorithmically matching features between images, and then blending them together. Most
manufacturers use their own internal methods for stitching that are very fast. There are also a few open
source alternatives.
For more information about how to implement panorama stitching, as well as a novel dual-camera approach for taking 360-degree panoramas, please see my previous post here: /en-us/articles/dual-camera-360-panorama-application. In this paper we will briefly compare two popular libraries, then go into detail on creating an application that can stitch images together quickly.
OpenCV* and PanoTools*
I tested two of the most popular open source stitching libraries: OpenCV and PanoTools. I initially
started working with PanoTools—a mature stitching library available on Windows*, Mac OS*, and
Linux*. It offers many advanced features and consistent quality. The second library I looked at is
OpenCV. OpenCV is a very large project consisting of many different image manipulation libraries and
has a massive user base. It is available for Windows, Mac OS, Linux, Android*, and iOS*. Both of these
libraries come with sample stitching applications. The PanoTools sample application completed our workload in 1:44 (min:sec); the OpenCV sample completed it in 2:16. Although PanoTools was faster out of the box, we chose the OpenCV sample as our starting point due to its large user base and its availability on mobile platforms.
Overview of Initial Application and Test Scenario
We will be using OpenCV’s sample application “cpp-example-stitching_detailed” as a starting point. The
application goes through the stitching pipeline, which consists of multiple distinct stages. Briefly, these
stages are:
1. Import images
2. Find features
3. Pairwise matching
4. Warping images
5. Compositing
6. Blending
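Most of these stages share the same shape: a loop over the input images in which each iteration is independent of the others. A minimal stdlib-only sketch of that shape, using hypothetical stand-in types and functions (`Image`, `Features`, `import_image`, `find_features`) in place of the real OpenCV calls:

```cpp
#include <string>
#include <vector>

// Hypothetical stand-ins for the real OpenCV stages; each operates on one
// image independently of the others, which is what later makes these
// stages easy to parallelize.
struct Image { std::string name; };
struct Features { std::string source; int count; };

inline Image import_image(const std::string& name) { return Image{name}; }
inline Features find_features(const Image& im) { return Features{im.name, 42}; }

// The per-stage loop shape shared by most of the pipeline: one independent
// iteration per input image, results stored by index.
inline std::vector<Features> run_feature_stage(
        const std::vector<std::string>& img_names) {
    int num_images = static_cast<int>(img_names.size());

    std::vector<Image> images(num_images);
    for (int i = 0; i < num_images; ++i)          // stage 1: import
        images[i] = import_image(img_names[i]);

    std::vector<Features> features(num_images);
    for (int i = 0; i < num_images; ++i)          // stage 2: find features
        features[i] = find_features(images[i]);

    return features;
}
```

Because each iteration reads and writes only its own index, these loops have no cross-iteration dependencies, which is the property the OpenMP changes below rely on.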
For testing, we used a tablet with an Intel® Atom™ quad-core SoC Z3770 with 2GB of RAM running
Windows 8.1. Our workload consisted of stitching together 16 1280x720 resolution images.
Multithreading Feature Finding Using OpenMP*
Most of the stages in the pipeline consist of repeated work that is done on images that are not
dependent on each other. This makes these stages good candidates for multithreading. All of these
stages use a “for” loop, which makes it very easy for us to use OpenMP to parallelize these blocks of
code.
The first stage we will parallelize is feature finding. First add the OpenMP compiler directive above the
for loop:
#pragma omp parallel for
for (int i = 0; i < num_images; ++i)
The loop will now execute across multiple threads; however, inside the loop we assign to the shared variables “full_img” and “img”. This creates a race condition that can corrupt our output. The easiest
way to solve this problem is to convert the variables into vectors. We should take these variable
declarations:
Mat full_img, img;
and change them to:
vector<Mat> full_img(num_images);
vector<Mat> img(num_images);
Now within the loop, we will change each occurrence of each variable to its new name.
full_img becomes full_img[i]
img becomes img[i]
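This rename is a standard OpenMP pattern: give every iteration its own slot instead of a shared scratch variable. A stdlib-only sketch of the same fix, where `process` is a hypothetical stand-in for the per-image work (without OpenMP enabled, the pragma is simply ignored and the loop runs serially with identical results):

```cpp
#include <vector>

// Hypothetical per-image work standing in for OpenCV's feature finding.
inline int process(int input) { return input * input; }

// Racy version, for contrast -- a single shared scratch variable written by
// every iteration is a data race under "#pragma omp parallel for":
//
//   int img;                        // shared scratch
//   for (int i = 0; i < n; ++i) {
//       img = inputs[i];            // every thread writes the same variable
//       results[i] = process(img);
//   }

// Fixed version: one slot per iteration, so concurrent iterations never
// touch the same element and no synchronization is needed.
inline std::vector<int> process_all(const std::vector<int>& inputs) {
    int n = static_cast<int>(inputs.size());
    std::vector<int> img(n), results(n);
    #pragma omp parallel for
    for (int i = 0; i < n; ++i) {
        img[i] = inputs[i];           // private slot, no race
        results[i] = process(img[i]);
    }
    return results;
}
```

The cost of this pattern is memory: every iteration's scratch data stays alive at once, which is also why the application later reuses these vectors instead of releasing them.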
The content loaded in full_img and img is used later within the application, so to save time we will not
release memory. Remove these lines:
full_img.release();
img.release();
Then we can remove this line from the compositing stage:
full_img = imread(img_names[img_idx]);
full_img is referred to again during scaling within the compositing loop. We will change the variable names again:
full_img becomes full_img[img_idx]
img becomes img[img_idx]
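This change is a load-once caching pattern: the images were already read into the `full_img` vector during feature finding, so the compositing loop can index into that vector instead of calling imread a second time. A stdlib-only sketch, with a hypothetical `load_image` stand-in that counts calls so the savings are visible:

```cpp
#include <string>
#include <vector>

// Hypothetical stand-in for cv::imread; counts calls to make reloads visible.
static int g_loads = 0;
inline std::string load_image(const std::string& name) {
    ++g_loads;
    return "pixels:" + name;
}

// Load each image exactly once up front (the feature-finding stage)...
inline std::vector<std::string> preload(const std::vector<std::string>& names) {
    std::vector<std::string> full_img(names.size());
    for (size_t i = 0; i < names.size(); ++i)
        full_img[i] = load_image(names[i]);
    return full_img;
}

// ...and have the compositing-style loop index into the cache instead of
// reloading (the removed line called the loader again here).
inline std::string composite(const std::vector<std::string>& full_img) {
    std::string out;
    for (size_t img_idx = 0; img_idx < full_img.size(); ++img_idx)
        out += full_img[img_idx];  // reuse the cached image, no second load
    return out;
}
```

The trade-off is the same as above: faster stitching in exchange for holding all decoded images in memory at once, which is workable for 16 images at 1280x720 but worth watching on larger workloads.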
Now the first loop is parallel. Next, we will parallelize the warping loop. First, we can add the compiler
directive to make the loop parallel: