
There’s been a lot of talk about 2027 being the deadline for disclosure because of an impending event. I saw an article mentioning the fear that AI technology will destroy humanity at some point. Here are parts of the article.
A former OpenAI governance researcher has made a chilling prediction: the odds of AI either destroying or catastrophically harming humankind sit at 70 percent.
After joining OpenAI in 2022 and being asked to forecast the technology’s progress, the 31-year-old became convinced not only that the industry would achieve AGI by 2027 but also that there was a great probability it would catastrophically harm or even destroy humanity.
I just thought that could be one reason. Any thoughts?
submitted by /u/macallanenigma