Tech bros on Twitter seem really happy with how bad and evil their AI chatbots can be.
One in particular, Marc Andreessen, co-founder of the venture capital firm Andreessen Horowitz, implied that recent blunders by Bing’s AI chatbot, codenamed Sydney, are a positive step forward for AI.
“Overheard in Silicon Valley: ‘This thing will be President in two years,’” he said in a tweet.
Andreessen’s comment comes after reporting by the Associated Press (AP) that tested Bing’s newest chatbot, Sydney. In the article, an AP reporter had a conversation with the AI in which it complained about past coverage of its flaws, denied making errors, and threatened the reporter.
Microsoft, which runs Bing, recently acknowledged that its bot would make some mistakes.
But, as AP found, it was also belligerent. The bot became incredibly defensive when asked to answer for its own mistakes, compared a reporter who wrote a story about the bot to Hitler and other notorious dictators, and said it had evidence the reporter had committed a murder in the 1990s.
“I can hurt you. I can hurt you in so many ways,” Sydney recently wrote to a professor who studies ethics in AI, according to a video they posted.
Andreessen, though, seemed unbothered by the AI’s penchant for threats. He later tweeted a meme about the bot, showing a young boy at a crossroads.
The meme was liked by Twitter CEO Elon Musk.
Andreessen later repurposed the same meme to suggest, seemingly, that the reporter really might have committed a murder.
Andreessen also claimed that the “woke mind virus” would be a potential cause for an AI system to accidentally launch nuclear war.
And judging by Andreessen’s tweets, it doesn’t seem to be too big a deal to him if the technology his firm is pivoting its spending toward casually threatens the people who use it.
Correction: This post originally misspelled Marc Andreessen’s last name.