
Disclaimer: I’m a complete amateur, and this process and its conclusions come from a very basic understanding of technology and orbital mechanics. I’m interested to hear what some experts out there think.

Here I’m going to replicate the procedure described by u/somethingsomethingbe in his great video, except my goal is instead to measure the magnitude of the stereoscopic effect.

If it were true that these are “twin” satellites on different orbits, we would see a changing distortion vector in the overlaid stereoscope. Basically, the offset between the left and right pictures would change (in pixel distance, pixel direction, or both) over the course of the ~1 minute video, since the satellites would be changing position relative to each other.

First I normalize the two images early on to a single consistent object — the passenger craft. Starting from u/somethingsomethingbe’s video, I do this as best I can from the moment the craft enters the top of the frame, then adjust the overlay position for the best match.

That is a horizontal displacement of 646.5 pixels and no vertical displacement (which is itself interesting, as it stays true throughout this exercise).

Now I pick two more sections of video and compare displacement. This one shows about a 3-pixel horizontal displacement (highlighted by vertical guide lines).

And this one, immediately before the flash, shows about a 2-pixel horizontal displacement (highlighted by vertical guide lines).

This is all very little distortion, well within our own measurement error margin. It suggests the lenses were stationary relative to each other for the duration of the video.
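To put “very little distortion” in perspective, here’s a back-of-envelope parallax calculation. Aside from the geostationary altitude, every number in it is an assumption I’m making for illustration — the cloud altitude and ground-sample distance are guesses, not values derived from the video:

```python
# Rough parallax check: how large a physical baseline between two GEO
# viewpoints would it take to produce a few pixels of stereoscopic offset
# on a cloud-height target? All inputs below are illustrative assumptions.

GEO_ALT_KM = 35_786    # geostationary altitude
CLOUD_ALT_KM = 12      # assumed cloud/target altitude
GSD_KM = 0.5           # assumed ground-sample distance per pixel

def parallax_pixels(baseline_km):
    """Apparent pixel offset of an elevated target seen from two viewpoints
    separated by `baseline_km` (small-angle approximation)."""
    ground_shift_km = baseline_km * CLOUD_ALT_KM / GEO_ALT_KM
    return ground_shift_km / GSD_KM

# Inverting: the baseline that would produce a 3-pixel offset
baseline_for_3px = 3 * GSD_KM * GEO_ALT_KM / CLOUD_ALT_KM  # ~4,473 km
```

Under these assumed numbers, even a baseline of thousands of kilometres only produces a few pixels of offset on cloud-height targets, so a small, stable residual offset is consistent with either of the two possibilities in the TL;DR.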

Next I wanted to look at frame matching. I picked the fastest-moving thing in this video — the entry of the first UAP sphere at the top right of the frame — then compared between layouts and between two frames. Same displacement of about 3 pixels (highlighted by guide lines).

This is interesting as well — unless some other form of post-processing occurred, it implies that the shutters of the two lenses are exactly synchronized, capturing the object at the exact same position and time. This is a technical challenge between two separate pieces of equipment, and I’m not sure even the USG would have a need to create the mechanisms for it. The capability, IMO, would have to be embedded in two systems sharing the same mission set, and they’d have to constantly re-check synchronicity. This is a much easier thing to do if the lenses instead shared one housing, mechanism, or sensor.
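One way to see how tight that synchronization bound is: convert the measured pixel offset into a timing offset using the object’s apparent speed and the frame rate. Both the speed and the frame rate below are illustrative guesses on my part, not measurements from the video:

```python
# How tightly does a <=3 px offset on the fastest-moving object bound any
# shutter desynchronization? The speed and frame rate are assumptions.

SPEED_PX_PER_FRAME = 10   # assumed speed of the sphere at frame entry
FPS = 24                  # assumed video frame rate
OFFSET_PX = 3             # pixel offset measured in the overlay

px_per_second = SPEED_PX_PER_FRAME * FPS          # 240 px/s
max_desync_ms = OFFSET_PX / px_per_second * 1000  # upper bound on desync
```

With these guesses, a 3-pixel offset caps any shutter desynchronization at roughly 12 ms — and if that 3 pixels is really residual parallax rather than timing error, the true mismatch is smaller still.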

Lastly, just for kicks, the biggest distortion I saw was in the clouds in the last frame. It’s 8 pixels, also horizontal only (highlighted by vertical guide lines).

TL;DR I think it’s one of two possibilities:

1. Two satellites on the SAME/SIMILAR orbit (like both geostationary), close to each other at the time, and likely with the same mission set (to share sensor resolution and shutter timing)
2. A single satellite with two lenses capturing simultaneously.

Last thought: the ~3-pixel distortion distance, combined with the near-perfect shutter-timing match, gives us good error margins for an image ‘vector’.
