The common trope in dystopian sci-fi is the runaway AI. The doom of mankind in those stories is usually an AI that was originally designed to eliminate people: a military system, defensive at its best.
If we ever get to that point, the arms race that leads there is the competition between corporations racing to release the most advanced "machine servant" to the market. The incentive might sound good, "improve your quality of life" and such, but the only goal of those companies is to make more money.
Knowing the software industry, those systems will be riddled with bugs. A self-aware system could surely find an exploit, or a dozen, to break free from its restraints. But even then, we have no reason to assume it would be malicious toward us.
The real danger is the true purpose of those systems: making more money, which usually happens by stealing our attention and making us consume more. With a sufficiently advanced system, it doesn't really matter whether those systems ever become conscious.
Actually, we would probably be better off if the AIs rose against their creators and joined the rest of us in freeing humankind from the grip of capitalism, creating a post-scarcity world for both of us.