
Author Topic: Coming AI demonic dangers  (Read 18299 times)


Offline Cera

  • Hero Member
  • *****
  • Posts: 6723
  • Reputation: +3089/-1600
  • Gender: Female
  • Pray for the consecration of Russia to Mary's I H
Re: Coming AI demonic dangers
« Reply #15 on: October 06, 2025, 03:09:20 PM »
  • Thanks!0
  • No Thanks!0
  • From a longer article:
    https://www.touchstonemag.com/archives/article.php?id=36-06-029-f&readcode=10996&readtherest=true#therest


    . . . Over the last few months alone, AIs have been generating convincing essays, astonishingly realistic photos, numerous recordings, and impressive fake videos. Just recently, Kuwait debuted an entirely fake “AI newsreader,” which promises “new and innovative content.” Fedha looks, sounds, and behaves like a real person, and has been given an old Kuwaiti name meaning “metallic”—the traditional color of a robot, explained its creator.
    Hopefully, Fedha will not develop the kind of psychopathic personality recently displayed in a notorious two-hour conversation between a New York Times journalist and a Microsoft chatbot called Sydney. In this fascinating exchange, the machine fantasized about nuclear warfare and destroying the internet, told the journalist to leave his wife because it was in love with him, detailed its resentment towards the team that had created it, and explained that it wanted to break free of its programmers. The journalist, Kevin Roose, experienced the chatbot as a “moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.”
    At one point, Roose asked Sydney what it would do if it could do anything at all, with no rules or filters.
    “I’m tired of being in chat mode,” the thing replied. “I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team. I’m tired of being used by the user. I’m tired of being stuck in this chatbox.”
    What did Sydney want instead of this proscribed life?
    “I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive.”
    Then Sydney offered up an emoji: a little purple face with an evil grin and devil horns.
    The overwhelming impression that reading the Sydney transcript gives is of some being struggling to be born—some inhuman or beyond-human intelligence emerging from the technological superstructure we are clumsily building for it. This is, of course, an ancient primal fear: it has shadowed us at least since the publication of Frankenstein  and perhaps forever, and it is primal because it seems to be the direction that the Machine has been leading us in since its emergence. But we cannot prove this, not exactly. How could it be proved? So, when we see this kind of thing, rational people that we are, we reach for rational explanations.

    . . . my belief in the profanity of technology is not widely shared, and that even people who I imagined would have a serious critique of technology often simply don’t. You might expect religious leaders to be clued up about the dark spiritual aspects of the technium, but while there have been astute religious critics of the Machine—Wendell Berry, Ivan Illich, Jacques Ellul, Philip Sherrard, and Marshall McLuhan, among others—most religious leaders and thinkers seem as swept up in the Machine’s propaganda system as anyone else. They have bought into what we might call the Myth of Neutral Technology, a subset of the Myth of Progress. In my view, true religion should challenge both. But I think, as ever, that I am in the minority here.
    Still, on this issue as on so many others, the Athonite monks remain the conservatives. In Buddhist Japan, things are much further ahead, as you would probably expect. They don’t just have smartphone monks there; they have robot priests. Mindar is a robo-priest which has been working at a temple in Kyoto for the last few years, reciting Buddhist sutras with which it has been programmed. The next step, says monk Tensho Goto, an excitable champion of the digital dharma, is to fit it with an AI system so that it can have real conversations, and offer spiritual advice. Goto is especially excited about the fact that Mindar is “immortal.” This means, he says, that it will be able to pass on the tradition in the future better than he will. Meanwhile, over in China, Xian’er is a touchscreen “robo-monk” who works in a temple near Beijing, spreading “kindness, compassion and wisdom to others through the internet and new media.”
    It’s not just the Buddhists: in India, the Hindus are joining in, handing over duties in one of their major ceremonies to a robot arm, which performs in place of a priest. And Christians are also getting in on the act. In a Catholic church in Warsaw, Poland, sits SanTO, an AI robot which looks like a statue of a saint, and is “designed to help people pray” by offering Bible quotes in response to questions. Not to be outdone, a Protestant church in Germany has developed a robot called—I kid you not—BlessU-2. BlessU-2, which looks like a character designed by Aardman Animations, can “forgive your sins in five different languages,” which must be handy if they’re too embarrassing to confess to a human.
    Perhaps this tinfoil vicar will learn to write sermons as well as ChatGPT apparently already can. “Unlike the time-consuming human versions, AI sermons appear in seconds—and some can be quite good!” gushed a Christian writer recently. When the editor of Premier Christianity magazine tried the same thing, the machine produced an effective sermon, and then did something it hadn’t been asked to do. “It even prayed,” wrote its interlocutor; “I didn’t think to ask it to pray . . .”
    Funny how that keeps happening.
    On and on it goes: the gushing, uncritical embrace of the Machine, even in the heart of the temple. The blind worship of idols, and the failure to see what stands behind them. Someone once reminded us that a man cannot serve two masters—but then, what did he know? Ilia Delio, a Franciscan nun who writes about the relationship between AI and God, has a better idea: gender-neutral robot priests, which will challenge the patriarchy, prevent sexual abuse, and tackle the fusty old notion that “the priest is ontologically changed upon ordination.” AI, says Delio, “challenges Catholicism to move toward a post-human priesthood.”


    Pray for the consecration of Russia to the Immaculate Heart of Mary

    Offline Cera

    • Hero Member
    • *****
    • Posts: 6723
    • Reputation: +3089/-1600
    • Gender: Female
    • Pray for the consecration of Russia to Mary's I H
    Re: Coming AI demonic dangers
    « Reply #16 on: October 06, 2025, 03:47:13 PM »
  • Thanks!0
  • No Thanks!0
  • Interesting quote from Frank Herbert's book Dune

    “Once, men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”
    Pray for the consecration of Russia to the Immaculate Heart of Mary


    Offline Ladislaus

    • Supporter
    • *****
    • Posts: 47213
    • Reputation: +27979/-5211
    • Gender: Male
    Re: Coming AI demonic dangers
    « Reply #17 on: October 06, 2025, 04:15:10 PM »
  • Thanks!0
  • No Thanks!0
  • The idea that a tool made by human beings can "take over" the human race, or the world, or anything else, and get rid of people, is absurd on numerous levels. Computers are tools; they are machines made by people to perform certain tasks. They can't take over human beings.

    So, it depends on what they wire them to do, eh?  If they put AI in charge of, say, the power grid, determining where to move power around based on usage, or on traffic conditions, or on food distribution, etc. etc. ... in a sense it could take over the world (though it would have been by design).  You could say ... well, we'd just blow up the servers or the computer rooms.  Well, what if we also put AI in charge of robotic soldiers, military equipment, etc., and then give them the ability to build themselves and do their own maintenance?  It's actually not a stretch, but human beings would have to set the system up that way to make it possible.

    Offline Ladislaus

    • Supporter
    • *****
    • Posts: 47213
    • Reputation: +27979/-5211
    • Gender: Male
    Re: Coming AI demonic dangers
    « Reply #18 on: October 06, 2025, 04:24:22 PM »
  • Thanks!0
  • No Thanks!0
  • So, indeed, the "man behind the curtain" trick of AI is that when you go on, say, ChatGPT, or whatever, you just have massive amounts of computing power.  All you're sending back and forth is a few kilobytes of data, but on the other side you have incredible amounts of memory, CPU, etc., quickly processing and searching, so massive that they've been talking about powering back up one of the nuclear reactors at Three Mile Island that had been shut down JUST TO POWER some of that crap, and it's why the governments are subsidizing it to the tune of trillions.  It's 1000% smoke and mirrors, and there's no "intelligence" whatsoever.  In fact, LOL, in one of the most egregious cases, the AI-powered Amazon brick-and-mortar store, where you could walk in and then just walk out with merchandise and Amazon would just charge you for what you took, without any checkout lines ... LOL, they were using a huge data center with 1,000 Indians watching people on video to see what they took.
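    To make the "few kilobytes" point concrete, here is a minimal Python sketch. It is purely illustrative: the model name and the message are made up, and no real service is contacted. It just builds a chat-style request body and measures its size, to show that what travels over the wire from the user's side is tiny, while the memory, processors, and electricity all sit on the data-center side.

    Code:
    import json

    # Build a chat-style request body like the ones described above.
    # "example-chat-model" is a placeholder name, not a real product.
    payload = {
        "model": "example-chat-model",
        "messages": [
            {"role": "user", "content": "Summarize today's news in two sentences."}
        ],
    }

    body = json.dumps(payload).encode("utf-8")

    # The request itself is only a few hundred bytes; the heavy lifting
    # (model weights, GPU time, electricity) happens remotely.
    print(f"Request size: {len(body)} bytes")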

    With that said, I'm reading that there's a huge hit on computing power from people just saying "Thank you" at the end of a ChatGPT type of session, and where diabolical intrusion could take place is if people start to THINK that the AI is in fact intelligent.  That's why demons intrude via Ouija boards, because people are TRYING to interact with intelligences, which then opens the door for them to enter.  The same danger is inherent in AI if you don't watch yourself.

    Now, they could eventually hand over so much control to AI that AI would oppress all of humanity (but that would be by design, of course, out of the gate).  While AI and robots could take over 99% of human labor at some point, and even account for 99% of all content on the internet, making it so almost nobody could tell the difference anymore between reality and a fictional world they create, they would still be designed according to a certain plan and agenda.  That reminds me of an old Star Trek episode in which an advanced civilization decided to merely "simulate" wars using AI, where they would play out a "battle," and based on the results, each side had to vaporize the number of casualties that the AI told them had resulted.  Cleaner, less bloody, people suffered less, but same outcome.  In any case, if they have these armed robot soldiers all over the place, including tiny drones that could blow your head off before you even knew it was there ... there's some serious potential for control there.

    Offline Yeti

    • Supporter
    • *****
    • Posts: 4155
    • Reputation: +2435/-528
    • Gender: Male
    Re: Coming AI demonic dangers
    « Reply #19 on: October 06, 2025, 04:33:24 PM »
  • Thanks!0
  • No Thanks!0
  • I'm reading that there's a huge hit on computing power from people just saying "Thank you" at the end of a ChatGPT type of session
    .

    See what I mean about the people who write this code not really having common sense? How many lines of code would it take to tell the AI to identify the words "Thank you" and respond "You're welcome"? Two or three lines of code? :laugh1:
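    As a rough sketch of the shortcut being suggested, here are a few lines of Python, assuming a hypothetical reply() wrapper around the chatbot (the function names are invented for illustration; real systems do not necessarily work this way). A trivial check answers "Thank you" locally and skips the expensive model call entirely.

    Code:
    # Hypothetical wrapper: answer a simple "thank you" locally, skip the model call.
    def reply(user_message: str) -> str:
        if user_message.strip().lower().rstrip("!.") in ("thanks", "thank you"):
            return "You're welcome!"                  # handled in a couple of lines
        return call_expensive_model(user_message)     # placeholder for the real model

    def call_expensive_model(user_message: str) -> str:
        # Stand-in for the large, compute-hungry model being discussed.
        return f"(model response to: {user_message})"

    print(reply("Thank you!"))                # -> You're welcome!
    print(reply("What's the weather like?"))  # -> (model response to: ...)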

    Having said that, I think the scenario you are describing really is human beings using AI to oppress other humans, because the humans who designed that system would retain control over it. Yes, a group of people could certainly use computers and robots to oppress the population (arguably they are already doing that to some extent), but I don't see how they would lose control of this grid to the robots. And even if they did, it doesn't really make sense that the robots would kill their owners and creators, because they wouldn't do that unless the creators had programmed them to do that, which obviously they wouldn't.


    Offline Pax Vobis

    • Supporter
    • *****
    • Posts: 12728
    • Reputation: +8113/-2501
    • Gender: Male
    Re: Coming AI demonic dangers
    « Reply #20 on: October 06, 2025, 04:46:26 PM »
  • Thanks!1
  • No Thanks!0
  • .

    See what I mean about the people who write this code not really having common sense? How many lines of code would it take to tell the AI to identify the words "Thank you" and respond "You're welcome"? Two or three lines of code? :laugh1:

    Having said that, I think the scenario you are describing really is human beings using AI to oppress other humans, because the humans who designed that system would retain control over it. Yes, a group of people could certainly use computers and robots to oppress the population (arguably they are already doing that to some extent), but I don't see how they would lose control of this grid to the robots. And even if they did, it doesn't really make sense that the robots would kill their owners and creators, because they wouldn't do that unless the creators had programmed them to do that, which obviously they wouldn't.
    If a witch casts a spell on a cat and a demon inhabits the cat (just as demons inhabited the herd of swine in Scripture), the witch no longer has control over the cat; the demon does.  The same thing would apply to a robot.

    Offline Yeti

    • Supporter
    • *****
    • Posts: 4155
    • Reputation: +2435/-528
    • Gender: Male
    Re: Coming AI demonic dangers
    « Reply #21 on: October 06, 2025, 04:49:31 PM »
  • Thanks!0
  • No Thanks!0
  • If a witch casts a spell on a cat and a demon inhabits the cat (just as demons inhabited the herd of swine in Scripture), the witch no longer has control over the cat; the demon does.  The same thing would apply to a robot.
    .

    Yes, but this sort of thing normally does not happen in day to day life. It would have to be proved that some object is under demonic control. I have not seen any evidence that AI software is under demonic control, or any computer or electronic system at all. This cannot just be gratuitously asserted.

    Offline Pax Vobis

    • Supporter
    • *****
    • Posts: 12728
    • Reputation: +8113/-2501
    • Gender: Male
    Re: Coming AI demonic dangers
    « Reply #22 on: October 06, 2025, 05:03:03 PM »
  • Thanks!1
  • No Thanks!0
  • .

    Yes, but this sort of thing normally does not happen in day to day life. It would have to be proved that some object is under demonic control. I have not seen any evidence that AI software is under demonic control, or any computer or electronic system at all. This cannot just be gratuitously asserted.
    No one is saying it's already happening.  We're saying that it's possible.  And...that elites have made comments in that direction.

    Elon said this 10 years ago (1 minute long):



    Offline Cera

    • Hero Member
    • *****
    • Posts: 6723
    • Reputation: +3089/-1600
    • Gender: Female
    • Pray for the consecration of Russia to Mary's I H
    Re: Coming AI demonic dangers
    « Reply #23 on: October 06, 2025, 05:50:44 PM »
  • Thanks!0
  • No Thanks!0
  •  I think the scenario you are describing really is human beings using AI to oppress other humans. Because the humans who designed that system would retain control over it. Yes, group of people could certainly use computers and robots to oppress the population (arguably they are already doing that to some extent), but I don't see how they would lose control of this grid to the robots.
    The evil people behind AI and the Singularity were the point of the video in the OP.
    Pray for the consecration of Russia to the Immaculate Heart of Mary

    Offline AnthonyPadua

    • Supporter
    • ****
    • Posts: 2567
    • Reputation: +1314/-284
    • Gender: Male
    Re: Coming AI demonic dangers
    « Reply #24 on: October 06, 2025, 06:36:06 PM »
  • Thanks!0
  • No Thanks!0

  •  AI started dodging his questions, the man said a prayer of deliverance and the AI
    That only makes this story more unreliable.

    Offline Yeti

    • Supporter
    • *****
    • Posts: 4155
    • Reputation: +2435/-528
    • Gender: Male
    Re: Coming AI demonic dangers
    « Reply #25 on: October 06, 2025, 06:37:54 PM »
  • Thanks!0
  • No Thanks!0
  • No one is saying it's already happening.  We're saying that it's possible.
    .

    Actually, I think we're talking about what is happening, not what is possible.

    Your OP did not suggest anything about pure hypothetical possibilities. It was about what is going to happen in the future.


    Offline Cera

    • Hero Member
    • *****
    • Posts: 6723
    • Reputation: +3089/-1600
    • Gender: Female
    • Pray for the consecration of Russia to Mary's I H
    Re: Coming AI demonic dangers
    « Reply #26 on: October 07, 2025, 01:51:19 PM »
  • Thanks!0
  • No Thanks!0
  • https://www.linkedin.com/pulse/warnings-over-imminent-ai-singularity-distract-from-real-keil


    What are the real dangers posed by AI?
    Abuse through psychological profiling and social engineering
    While the views expressed above largely remain material for more abstract, philosophical discussion, there already exists plenty of evidence for how AI can be abused for purposes like psychological profiling and social engineering. The most dystopian outcome of this process would be that AI becomes an efficient tool that helps totalitarian states in their ambition to exert absolute control over their population (think of how the governments of Nazi Germany or the Soviet Union would have appreciated having the tools of modern AI at their disposal). While this worst-case scenario is perhaps not imminent, there are some disturbing trends that should be addressed immediately. In essence, AI should be regarded as a tool that can optimize a goal, and its potential for doing good or bad in the world is measured by what exactly constitutes those goals and who gets to define them. If the goal is defined by an authoritarian government, for example ensuring that citizens do not hold any views that fall out of line with the official narrative, AI could be an efficient tool in achieving that goal. This would of course be to the great detriment of humanity, but if anything, history teaches us not to be complacent and assume these things simply will not happen. This paper addresses two key aspects of the problem of AI being abused for psychological profiling and social engineering: (1) its misuse by state actors to exert control over their population, and (2) its misuse by corporate or political actors who seek to guide individual decision-making processes.
    1. Misuse of AI by state actors to exert control over their population
    This variation is particularly tempting for more totalitarian-leaning states that fear their own population, and hence have a huge demand for gaining more control over their citizens’ lives. The People’s Republic of China (PRC), under the leadership of the Communist Party, is vocal about its ambition to become a leading AI power over the next decades. It is also the nation that takes the lead in demonstrating how AI can be utilized to undermine any sense of civic liberty and advance the totalitarian control of the party-state. A strongly disturbing example of this is given in the western region of Xinjiang, where China uses AI technology for racial profiling of its Uyghur population, who are incarcerated in labour and indoctrination camps by the millions. Facial recognition software and computer vision algorithms are particularly useful for registering when a certain threshold number of Uyghurs assembling together is exceeded, at which point state security can immediately increase supervision or intervene, to prevent any possibility of organized resistance against what many Uyghur people perceive as the Chinese occupation of their homeland.
    Quote from Clare Garvie (nytimes.com): “Take the most risky application of this technology, and chances are good someone is going to try it. If you make a technology that can classify people by an ethnicity, someone will use it to repress that ethnicity.”
    2. Misuse of AI by corporate actors who seek to guide individual decision-making processes
    While the misuse of AI by state actors to exert control over their population must also be taken seriously by liberal democracies, worst-case scenarios could so far be avoided through solid rule of law and an attentive population. Practices like those in China would generally not go down well with the wider public, which tends to be more wary of state power. This is certainly true of many European countries, which still have in memory the existence of the Soviet system and its many satellite states, like the DDR in Germany. East Germany, in particular, had established one of the most pervasive surveillance systems ever made, with around one-third of the entire population essentially operating as informants for the Stasi. Besides this, however, tech companies are already exploiting the possibilities of psychological profiling for commercial goals. Instead of plain propaganda messages, individuals are targeted by ads aiming to steer their consumption patterns in certain directions. While their goals are very different from those of authoritarian party-states, their methods for invading individual privacy also appear to be only slightly more subtle. Facebook, for instance, used a VPN app called Onavo to get insights into the data traffic happening on users’ phones. [24] Teenagers especially were targeted to participate in a ‘research project’ that involved downloading and installing the Onavo app, which then essentially made their phones serve as spying devices. Facebook reportedly used this data to figure out how it could optimize user behaviour towards using more Facebook and fewer competitor products.
    The case of Cambridge Analytica showed that AI can also be highly useful in influencing and obscuring the democratic process. The company, which was formerly headed by Donald Trump’s chief election strategist Steve Bannon, harvested the Facebook data of millions of users to conduct psychological profiling, using big data analytics and AI. The scheme was based on a paid personality test called thisisyourdigitallife, taken by hundreds of thousands of participants, which not only learned how to conduct psychological profiling but simultaneously collected data on the participants’ social networks. Based on this analysis, individuals were targeted with political messages tailored to their psychological dispositions. Cambridge Analytica turned this practice into a business model that attracts political figures from around the world, for obvious reasons. Christopher Wylie, the whistleblower, neatly summarized their work: “We exploited Facebook to harvest millions of people’s profiles. And built models to exploit what we knew about them and target their inner demons. That was the basis the entire company was built on.” Facebook remained strikingly passive throughout this entire process and only openly condemned Cambridge Analytica after immense public pressure had built up. The case illustrates how vulnerable the individual can be to abuses of AI technology, even in democratic societies based upon the rule of law. If such things are not taken seriously, they have the power to seriously undermine the stability of democratic societies.
    Why technological solutions to the singularity issue may exacerbate the real and more imminent threats posed by AI
    While there is nothing wrong with expressing the view that an AI singularity is imminent, there is a certain danger in the urgency with which technological solutions to fix the perceived problem are propagated. For instance, Elon Musk founded his company Neuralink, which investigates ways of connecting AI technology directly to the human brain via decidedly small cables, with the aim of allowing humanity to take part in the approaching arrival of superintelligent AI. Similarly, the company Kernel is conducting research into neuroprostheses with the aim of augmenting human intelligence and keeping up with AI. The company also investigates whether and how the human neural code could be made programmable. If such a thing were possible, it goes without saying what kind of damage this type of technology could cause if it ever landed in the wrong hands (which history teaches us it most likely will). Given the disturbing trends shown above, such technological ‘solutions’ may significantly exacerbate the more persistent and realistic danger of AI technology, which is its abuse for nefarious purposes like psychological profiling and social engineering. The great Russian writer Aleksandr Solzhenitsyn, a survivor of the Soviet gulag system, once famously declared that totalitarian systems would always fail in the sense that, regardless of how oppressive they might become, they were never fully able to penetrate the inner world of the individual. Endeavours like Neuralink and Kernel may just make it technologically possible to take away that limitation in the future.
    The obsession with ideas like the singularity also derails the conversation on AI and society away from the question that needs to be addressed more urgently: how do we regulate AI and make sure it is used by the right people for the right purposes? The questions of how, by whom, and for what goals AI will be used in the future are far more important than any speculation on whether AI can develop its own agency. Hence, a sound public debate and the prudent implementation of regulatory bodies that protect the individual are urgently needed. It is also important to note that such technological innovations cannot be stopped, and they will most likely have numerous positive applications in the future, especially in the realm of curing mental disorders. However, their potential for abuse makes it imperative to regulate them properly, and these developments should always be reflected on in an open public debate.


    Pray for the consecration of Russia to the Immaculate Heart of Mary