While Google says proper hardware can help minimize stitch lines, factors like parallax (the apparent shift in an object's position when it's viewed from slightly different camera positions) can create problems for the software. A slight time difference between each photo can also create oddities where the images meet, which comes up often since Street View cameras are typically mounted on top of moving cars. To illustrate, Google shared images of the Sydney Opera House with off-kilter architecture and a panorama of London's Tower Bridge in which one half of the bridge doesn't line up with the other.
“In order to provide more seamless Street View images, we’ve developed a new algorithm based on optical flow to help solve these challenges,” wrote software engineer Mike Krainin and research scientist Ce Liu in their blog post. “The idea is to subtly warp each input image such that the image content lines up within regions of overlap. This needs to be done carefully to avoid introducing new types of visual artifacts. The approach must also be robust to varying scene geometry, lighting conditions, calibration quality, and many other conditions.”
To correct the stitch, Google divided its approach into two steps. The first is optical flow: computing how the image content in the overlap region of one photo corresponds to the same content in the neighboring photo. This overlap region is downsampled so the computer can handle the process quickly.
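Google's blog post doesn't include code, but the flow step can be sketched in miniature. The toy example below (my own illustration, not Google's implementation) downsamples two synthetic overlap strips by average pooling, then finds the horizontal shift that best aligns them, a heavily simplified, single-offset stand-in for dense optical flow. The `downsample` and `estimate_shift` helpers are hypothetical names for illustration.

```python
import numpy as np

def downsample(img, factor=2):
    """Average-pool a grayscale image by an integer factor to speed up matching."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def estimate_shift(a, b, max_shift=4):
    """Brute-force the horizontal shift (in low-res pixels) that best aligns
    overlap strip b with strip a, by minimizing the sum of squared differences."""
    best, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        err = np.sum((a - np.roll(b, -s, axis=1)) ** 2)
        if err < best_err:
            best, best_err = s, err
    return best

# Synthetic overlap strips: b is a copy of a shifted 2 pixels to the right,
# mimicking a misalignment between two adjacent Street View shots.
rng = np.random.default_rng(0)
a = rng.random((32, 64))
b = np.roll(a, 2, axis=1)

a_lo, b_lo = downsample(a), downsample(b)
shift = estimate_shift(a_lo, b_lo)
print(shift)  # 1: a misalignment of one low-res pixel (two full-res pixels)
```

Real optical flow produces a displacement per pixel rather than one global offset, but the principle of matching downsampled overlap regions before committing to a full-resolution correction is the same.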
Once the overlap regions are matched, the software warps both sides of the stitch simultaneously until they properly align, rather than distorting only one image. The adjustment is computed with a nonlinear optimization method, which makes the process less likely to correct one oddity while introducing another.
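The symmetric-warping idea can also be sketched with a toy example. Below, a measured misalignment is split in half and each image is warped toward the other, so neither side absorbs the full distortion. This is only an illustration under my own assumptions (a single rigid shift, fractional warping by linear interpolation); the production system solves a full nonlinear optimization over per-pixel warps, and `warp_columns` is a hypothetical helper, not Google's code.

```python
import numpy as np

def warp_columns(img, dx):
    """Shift an image horizontally by a (possibly fractional) number of
    columns using linear interpolation -- a 1-D stand-in for flow warping."""
    w = img.shape[1]
    x = (np.arange(w) - dx) % w       # source coordinate for each output column
    x0 = np.floor(x).astype(int)
    x1 = (x0 + 1) % w
    frac = x - x0
    return img[:, x0] * (1 - frac) + img[:, x1] * frac

rng = np.random.default_rng(1)
left = rng.random((32, 64))
right = np.roll(left, 3, axis=1)      # misaligned by 3 columns

# Split the measured 3-pixel offset evenly: warp each side half way toward
# the other instead of moving only one image by the full amount.
half = 3 / 2
left_w = warp_columns(left, +half)
right_w = warp_columns(right, -half)

before = np.abs(left - right).mean()
after = np.abs(left_w - right_w).mean()
print(after < before)  # True: seam mismatch shrinks after symmetric warping
```

Splitting the correction between both images is the design choice worth noting here: warping one side by the full offset fixes the seam just as well, but concentrates all the distortion in a single photo.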
With the new algorithm, Google Street View panoramas of everything from iconic landmarks to downtown hotspots should show fewer visible stitch lines. Google says the algorithm was recently added to the stitching process, and users should start seeing the improved views soon.