Twitter bot learns from users to be racist
Less than 24 hours after @TayandYou went online, Microsoft halted account's posting
New York
Microsoft set out to learn about "conversational understanding" by creating a bot designed to have automated discussions with Twitter users, mimicking the language that they use.
What could go wrong? If you guessed "It will probably become really racist," you've clearly spent time on the Internet. Less than 24 hours after the bot, @TayandYou, went online on Wednesday, Microsoft halted posting from the account and deleted several of its most obscene statements.
The bot, developed by Microsoft's Technology and Research and Bing teams, got major assistance in being offensive from users who egged it on. It disputed the existence of the Holocaust, referred to women and minorities with unpublishable words and advocated genocide. Several of the tweets were sent after users commanded the bot to repeat their own statements, a…