MH370 Video Analysis: Measuring Stereoscopic Distortion to Infer Source Satellite Characteristics


Disclaimer: I’m a complete amateur, and this process and its conclusions come from a very basic understanding of technology and orbital mechanics. I’m interested to hear what some experts out there think.

Here I’m going to duplicate the procedure described by u/somethingsomethingbe in his great video, except that my goal is to measure the magnitude of the stereoscopic effect.

If the two views really came from “twin” satellites on different orbits, we would see a changing distortion vector in the overlaid stereoscope. Basically, the offset between the left and right images (in pixel distance or pixel direction) would change over the course of the ~1-minute video, since the satellites would be changing position relative to each other.
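
To put a rough number on that intuition, here is a minimal back-of-the-envelope sketch in Python. Every value in it (focal length in pixels, slant range, relative drift speed) is an assumption I picked for illustration, not something measured from the video; the point is that even a modest relative drift between two satellites should show up as tens of pixels of overlay drift over a one-minute clip.

```python
# Illustrative back-of-the-envelope: how the stereo offset would drift if the
# two views came from separate satellites whose baseline changes over the clip.
# All numbers below are assumptions for illustration, not measured values.

FOCAL_LENGTH_PX = 50_000     # assumed effective focal length, in pixels
DISTANCE_M = 500_000.0       # assumed slant range to the scene (LEO-ish), m
REL_SPEED_MPS = 10.0         # assumed relative drift between satellites, m/s
CLIP_SECONDS = 60.0          # approximate length of the video

baseline_change_m = REL_SPEED_MPS * CLIP_SECONDS
# Disparity for a point at distance Z with baseline B is roughly f * B / Z,
# so a baseline change dB shifts the overlay by roughly f * dB / Z pixels.
offset_drift_px = FOCAL_LENGTH_PX * baseline_change_m / DISTANCE_M

print(f"Baseline change over clip: {baseline_change_m:.0f} m")
print(f"Expected overlay drift:    {offset_drift_px:.1f} px")
```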

First I normalize the two images early on to a single consistent object: the passenger craft. Starting from u/somethingsomethingbe’s video, I do this as best I can from the moment the craft enters the top of the frame, then adjust the overlay position for the best match.

https://imgur.com/yeEZv5I
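
For anyone who wants to reproduce the alignment step programmatically rather than by eye, here is a minimal sketch, assuming two same-size frames saved as left.png and right.png (hypothetical filenames) and a hand-picked crop around the passenger craft. It uses OpenCV template matching; the crop coordinates are placeholders, not values from the actual frames.

```python
# Sketch of the overlay-alignment step: find where a patch around the
# passenger craft in the left frame best matches in the right frame.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Hypothetical crop around the passenger craft in the left frame (x, y, w, h).
x, y, w, h = 100, 40, 60, 60
template = left[y:y + h, x:x + w]

# Slide the template over the right frame and take the best match location.
result = cv2.matchTemplate(right, template, cv2.TM_CCOEFF_NORMED)
_, _, _, max_loc = cv2.minMaxLoc(result)

dx = max_loc[0] - x   # horizontal displacement in pixels
dy = max_loc[1] - y   # vertical displacement in pixels
print(f"Displacement: dx={dx}, dy={dy}")
```

The dx/dy it prints is the same quantity as the displacement reported below, just measured automatically instead of by hand.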

That is a horizontal displacement of 646.5 pixels with no vertical displacement (which is itself interesting, as it stays true throughout this exercise).

Now I pick two more sections of the video and compare displacement. This one shows about a 3-pixel horizontal displacement (highlighted by vertical guide lines).

https://imgur.com/fmlOsR8

And this one, immediately before the flash, shows about a 2-pixel horizontal displacement (highlighted by vertical guide lines).

https://imgur.com/XRI5vZQ

This is all very little distortion, well within our measurement error margin. It suggests the lenses were stationary relative to each other for the duration of the video.
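
If you want to check that stationarity claim across the whole clip rather than at three hand-picked moments, a sub-pixel registration loop works. This sketch assumes the left/right crops for each sampled moment have been saved into .npz archives (hypothetical filenames) and uses scikit-image’s phase correlation; a flat dx across samples is what “stationary relative to each other” should look like.

```python
# Sketch of the stationarity check: measure the left/right offset at several
# sampled moments and see whether it drifts over the clip.
import numpy as np
from skimage.registration import phase_cross_correlation

for t in ("early", "middle", "pre_flash"):
    data = np.load(f"pair_{t}.npz")            # hypothetical archives holding
    left, right = data["left"], data["right"]  # matching left/right crops
    shift, error, _ = phase_cross_correlation(left, right, upsample_factor=10)
    dy, dx = shift   # (row shift, column shift), sub-pixel
    print(f"{t}: dx={dx:+.1f} px, dy={dy:+.1f} px (error={error:.3f})")
```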

Next I wanted to look at frame matching. I picked the fastest-moving thing in this video, the entry of the first UAP sphere at the top right of the frame, and compared between layouts and between two frames. Same displacement of about 3 pixels (highlighted by guide lines).

https://imgur.com/DVG7wsl

This is interesting as well: unless some other form of post-processing occurred, it implies that the shutters of the two lenses are exactly synchronized, capturing the object at exactly the same position and time. That is a technical challenge between two separate pieces of equipment, and I’m not sure even the USG would have a need to build the mechanisms for it. The capability, IMO, would have to be embedded in two systems sharing the same mission set, and they would have to constantly re-check synchronicity. It is much easier if the lenses instead share one housing, mechanism, or sensor.
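
That synchronization claim can be turned into a rough upper bound on any timing offset between the two shutters. The sketch below is pure arithmetic with assumed inputs (the sphere’s apparent speed in pixels per frame and the clip’s frame rate are guesses, not measurements): if a fast mover lines up to within a pixel in both views, any shutter offset must be smaller than one pixel’s worth of its motion.

```python
# Rough bound on shutter offset, using the fastest mover in the clip.
# All inputs are illustrative assumptions, not measurements from the video.
SPHERE_SPEED_PX_PER_FRAME = 20.0   # assumed apparent speed of the UAP sphere
FRAME_RATE_HZ = 24.0               # assumed frame rate of the clip
MATCH_TOLERANCE_PX = 1.0           # residual mismatch attributable to timing

px_per_second = SPHERE_SPEED_PX_PER_FRAME * FRAME_RATE_HZ
max_shutter_offset_s = MATCH_TOLERANCE_PX / px_per_second
print(f"Shutter offset bound: {max_shutter_offset_s * 1e3:.1f} ms")
```

With these assumed numbers that comes out to roughly 2 ms, which would be tight for two independent platforms but trivial for a single sensor assembly.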

Lastly, just for kicks, the biggest distortion I saw was in the clouds in the last frame. It’s 8 pixels, also horizontal only (highlighted by vertical guide lines).

https://imgur.com/uFcAiOA
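
Just to show the shape of the geometry, the depth-dependent parallax (about 8 px on the clouds versus 2-3 px on the aircraft) can be inverted for an implied baseline between the two viewpoints. Every input below is an assumption (focal length in pixels, satellite altitude, cloud and aircraft altitudes), so treat the output as the form of the calculation, not a finding.

```python
# Very rough baseline estimate from depth-dependent parallax: ~8 px of offset
# on the clouds vs ~2 px on the aircraft leaves ~6 px of parallax between two
# layers a few km apart in altitude. All inputs are assumptions.
FOCAL_LENGTH_PX = 50_000       # assumed effective focal length, in pixels
SAT_ALTITUDE_M = 500_000.0     # assumed satellite altitude (LEO-ish)
PLANE_ALTITUDE_M = 10_000.0    # assumed cruise altitude of the aircraft
CLOUD_ALTITUDE_M = 5_000.0     # assumed cloud-top altitude
PARALLAX_PX = 6.0              # ~8 px (clouds) minus ~2 px (aircraft)

z_plane = SAT_ALTITUDE_M - PLANE_ALTITUDE_M
z_cloud = SAT_ALTITUDE_M - CLOUD_ALTITUDE_M
# Disparity of a point at range Z with baseline B is ~ f * B / Z, so the
# disparity *difference* between the two layers pins down the baseline:
baseline_m = PARALLAX_PX / (FOCAL_LENGTH_PX * (1 / z_plane - 1 / z_cloud))
print(f"Implied baseline: {baseline_m:.0f} m")
```

Swap in a geostationary altitude and the same few pixels of parallax imply a very different baseline; the calculation is only as good as the assumed optics.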

TL;DR I think it’s one of two possibilities:

1. Two satellites on the SAME/SIMILAR orbit (like both geostationary), close to each other at the time, and likely with the same mission set (to share sensor resolution and shutter timing).
2. A single satellite with two lenses capturing simultaneously.

Last thought: the ~3-pixel distortion distance, combined with the near-perfect shutter-timing match, gives us good error margins for an image ‘vector’.
