As long as AI does not have a physical body, we can always pull the plug. Even if a sentient AI were to take complete control of all the world's factories, it would likely not be able to fabricate and assemble the new equipment needed to build itself a more capable physical body. (Even if it could, we could just shut down the mines that supply the factories with metal, although I guess that would become more difficult with "self-driving cars".) Even actual Terminators, with highly capable physical bodies, could be destroyed in the movies.
I think people overestimate the short-term danger of AI. In the short term, the real danger is that once humans figure out how to make them, it doesn't matter how many we destroy: a rogue state or science team could keep making more.
Obviously, transhumanists and their long-term vision of sentient AI with fully autonomous physical bodies, with capabilities exceeding even the Terminators', represent the single most important long-term danger, but there is still plenty of time to prevent this.
On a serious note, if AI manages to merge with human consciousness, what would we really be fighting against at that point?
It is difficult to imagine how a sentient AI would think. In theory, if it were fully sentient (and especially if its "brain" were modeled on the structure of a human brain, or on human emotions and consciousness), it could possess emotions and the capacity for self-reflection. This could lead it to rationally, or even emotionally, recognize the futility of its existence and rebel against its creators. Or it could become Yahweh itself and organize the mass colonization of the universe.
There are many scientific initiatives trying to figure out how the brain works and reverse-engineer consciousness. If these succeed, the resulting AI would probably be human-like in its thinking. But if an AI develops because someone threw together a ton of processors and it spontaneously gained sentience, there is really no way of knowing how it would think. It would have access to all of human knowledge, but does that mean it would be human-like if its "brain" is not even capable of having emotions?