The internet has a finite address space, so any publicly reachable IP address is constantly bombarded by scanners, bots, and automated hacking scripts mapping out ports, probing for vulnerabilities, and taking stock of the network.

Von Neumann probes can be compared to this. Although the universe is vast, if you have self-replicating interstellar probe technology and you set a limited time scale, there is only a finite number of planets you can visit. If life comparable to ourselves is limited to inhabiting planets like Earth, the number of planets worth visiting shrinks further.
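The finiteness argument can be put as a back-of-envelope calculation. Every number below (local stellar density, the fraction of stars with Earth-like planets, the expansion speed of the probe wave, the time window) is an illustrative assumption, not a measured value; the point is only that a fixed speed and a fixed time yield a finite, countable pool of targets:

```python
import math

# Rough sketch: how many stars a self-replicating probe wave could
# reach in a fixed time window. All constants are assumptions.

STAR_DENSITY = 0.004      # stars per cubic light-year (order of magnitude, solar neighborhood)
EARTHLIKE_FRACTION = 0.1  # assumed fraction of stars hosting an Earth-like planet
WAVE_SPEED = 0.01         # expansion speed of the probe front, as a fraction of c
TIME_YEARS = 1_000_000    # assumed time window in years

radius_ly = WAVE_SPEED * TIME_YEARS              # how far the wave front travels
volume_ly3 = (4 / 3) * math.pi * radius_ly ** 3  # volume of the swept sphere
stars = STAR_DENSITY * volume_ly3                # stars inside that sphere
earthlike = EARTHLIKE_FRACTION * stars           # the reduced pool of Earth-like targets

print(f"radius reached: {radius_ly:,.0f} ly")
print(f"stars reached: {stars:,.0f}")
print(f"Earth-like targets: {earthlike:,.0f}")
```

With these particular guesses the wave sweeps a 10,000-light-year sphere: a huge but finite pool, which is what makes the repeat-visit argument below work.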

That means that, as with the internet, each planet in this relatively small pool would be visited by multiple von Neumann probes regularly within that time frame. This could explain how Earth comes to be visited by multiple different UAPs: several civilizations capable of building von Neumann probes may be within range.

A probe would be interested in gathering information and reproducing itself. It would be more successful if it were stealthy and non-confrontational, so that other parties leave it alone. This explains why it would be hard to detect.

Because these probes operate autonomously, far from home, it is not inconceivable that they exhibit buggy behavior, which would explain why they sometimes crash. They can encounter unforeseen situations or simply break down from old age.

If the probes' designers have general-purpose robots available, they may find it easier to design a probe that can also accommodate humanoids, and then have humanoid-like robots control it. These robots could be partly or fully biological. That might be easier than developing a dedicated probe-control system: just use an off-the-shelf robot instead.

Just some thoughts on recent developments.

submitted by /u/Xoknit