It’s fun to think about how they might have produced advanced satellite imagery that seems to “debunk physics”. Let’s consider:


– sparsity reconstruction: you can get surprising gains by reconstructing sparse signals. The idea is that you have enough samples, and you accept you are missing fine details, but you know those finer frequency components are subtly distorting the coarser frequency components you *do* have; with the right signal processing (sparse reconstruction), you can bring those finer components out of the image, given enough sparse (poorer quality / coarser) samples. The tech solution is a higher shutter speed / frame capture rate, or multiple cameras snapping at the same time. Think “compound eye”, like on a fly.
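A toy sketch of that idea, in the classic compressed-sensing form: a signal with only a few nonzero components is recovered from far fewer measurements than its length, using iterative soft-thresholding (ISTA). All sizes and numbers here are made up for illustration; this is the textbook principle, not anyone’s actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 200, 80, 5            # signal length, measurements, nonzeros
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.normal(0, 1, k) * 5

A = rng.normal(0, 1.0 / np.sqrt(m), (m, n))   # random sensing matrix
y = A @ x_true                                 # undersampled measurements

# ISTA: minimise 0.5*||Ax - y||^2 + lam*||x||_1 by gradient step
# followed by soft-thresholding (which pushes x towards sparsity).
L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
lam = 0.01
x = np.zeros(n)
for _ in range(2000):
    x = x + A.T @ (y - A @ x) / L
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)

# Relative reconstruction error: small despite m << n measurements.
err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

The point of the toy: 80 coarse measurements recover a 200-sample signal almost exactly, because the sparsity prior supplies the “missing” information.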

– traditional sensor fusion: you have your crappy, optically limited satellite, but you also have: transponder data to identify the airplane type; 3D models of said airplane type; lidar / laser-bounce data on the aircraft in flight from other platforms; radar data; weather data, including detailed solar spectra and atmospheric medium characteristics. All of these you can mathematically “add in” to your Bayesian “belief system” about what the image should look like, based on physics and models, to produce a higher-resolution “sensor fusion composite” image.
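The “add in” step can be sketched in its simplest textbook form: fusing independent Gaussian estimates of the same quantity, where precisions (inverse variances) add and the fused estimate is tighter than any single sensor. The sensor values below are invented purely for illustration.

```python
# Hypothetical 1-D estimates of the same quantity (say, a position in
# metres) from independent sensors, each as (mean, variance).
measurements = [
    (10.2, 4.0),    # coarse optical satellite
    (9.8, 1.0),     # radar return
    (10.05, 0.25),  # lidar bounce: best single sensor
]

# Bayesian fusion of independent Gaussians: precisions add, and the
# fused mean is the precision-weighted average of the sensor means.
precision = sum(1.0 / var for _, var in measurements)
fused_var = 1.0 / precision
fused_mean = fused_var * sum(mu / var for mu, var in measurements)
```

The fused variance is always smaller than the best individual sensor’s, which is the whole game: each extra data stream shrinks the uncertainty about what the image “should” contain.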

– exotic sensor fusion: same as above, but using exotic / non-public platforms. This could include things like high-res lidar onboard the satellite, IR lidar, a microdrone sensor swarm, or other unreleased LEO sensor platforms.

– mix and match the above, and combine it with AI to do image interpolation / enhancement / enlargement / super-resolution.
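A minimal 1-D sketch of the multi-frame super-resolution idea, with made-up data: two sub-pixel-shifted low-res captures jointly contain fine detail that no amount of interpolating a single capture can restore.

```python
import numpy as np

# "hi" stands in for the fine-grained scene a single capture misses.
hi = np.sin(np.linspace(0, 4 * np.pi, 64))

# Two low-res "frames" of the same scene: each keeps every other sample,
# with frame_b offset by half a low-res pixel relative to frame_a.
frame_a = hi[0::2]
frame_b = hi[1::2]

# Single-frame enhancement: all you can do is interpolate, which
# smooths over the detail the sampling lost.
naive = np.interp(np.arange(64), np.arange(0, 64, 2), frame_a)
err_naive = np.max(np.abs(naive - hi))

# Multi-frame ("shift and add"): register the sub-pixel-shifted frames
# and interleave them back onto the fine grid -- the "missing" detail
# was sitting in the second frame all along.
recon = np.empty_like(hi)
recon[0::2] = frame_a
recon[1::2] = frame_b
err_multi = np.max(np.abs(recon - hi))
```

Real multi-frame pipelines have to estimate the sub-pixel shifts and cope with noise, which is where the heavy math (and the AI) comes in; the toy just shows why extra frames beat interpolation.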

The point is: lots of data, excellent math to combine it, excellent AI to transform it.

I think it’s safe to say that objections such as “How can the government secretly obtain high-resolution images? It’s preposterous!” are not exactly strong objections to Hypothesis A (“the satellite video is real”), given that said endeavour has been a key objective of governments for 60 years.

If unconvinced, consider the advance of consumer camera tech. Remember crappy digital cameras? That wasn’t long ago. Now look what your “piece of shit” iPhone can do, and that’s not even “high end” consumer tech. It’s not about avoiding physics; it’s about squeezing as much out of it as possible. Are you confident you’ve considered and dismissed everything that might have been tried in this field, by very smart people, over 60 years?

The stakes are geopolitical advantages. The budgets are nation-state size. Do you want to reframe your objections or persist?

I think more convincing ways to debunk the satellite video are to focus on provenance and narrative. Provenance: who had this imagery and why. Narrative: why did the aliens let it be filmed.
