It looks like Google finally dropped the charade. The tech giant that once proudly paraded its “Don’t be evil” mantra has officially scrapped its pledge not to develop AI for weapons and surveillance systems. That’s right, the same company that virtue-signaled about “ethics” and “human rights” when it was fashionable is now fully on board with weaponizing artificial intelligence. Because when there are billions of dollars in government contracts up for grabs, the moral high ground suddenly becomes a lot less appealing.
Back in 2018, Google faced internal backlash over Project Maven, a Department of Defense initiative using AI to analyze drone footage. After thousands of employees signed a protest petition and some resigned, Google made a big deal about releasing “AI principles,” swearing it would never let its technology be used for things like autonomous weapons or invasive surveillance. Fast forward to today, and those so-called principles have been conveniently “updated”—aka tossed into the corporate shredder.
Google’s new and improved guidelines? They’re all about “Bold Innovation”—which is code for “we’ll do whatever makes us the most money.” Now the company claims it will develop AI as long as the “likely overall benefits substantially outweigh the foreseeable risks.” Translation: if the check clears, ethics can take a backseat.
Let’s be honest—this was always going to happen. AI is the future of warfare, and every major power knows it. Drones, missile defense systems, autonomous naval vessels, and even AI-piloted fighter jets like DARPA’s Air Combat Evolution (ACE) program are already in play. Countries are racing to develop swarming drone technologies, AI-enabled targeting systems, and autonomous surveillance tools. Do we really think Google—a company obsessed with dominating every industry it touches—was going to sit on the sidelines?
And the irony? The same left-wing activists who praised Google for “standing up” against military contracts are the ones who cheer for Big Tech’s power over our daily lives. They didn’t mind when AI was being used to censor political speech, manipulate search algorithms, or track online behavior. But apply that same technology to defend national security, and suddenly it’s a crisis.
At the end of the day, this is about power and profit. Google doesn’t care about ethics—they care about contracts, influence, and staying ahead of the competition. The lesson here? Never trust a corporation that lectures you about morality while counting government dollars behind closed doors.