By now you have probably heard of Tay AI, Microsoft’s attempt to create a female teenage chatbot that went rogue after less than 24 hours of exposure to unfiltered Internet users (1, 2, 3, 4, 5). When the company first launched Tay on March 23, 2016, her tagline was, “Microsoft’s AI fam from the internet that’s got zero chill.” The tech giant initially used huge amounts of online data and simulated neural networks to train the bot to talk like a millennial, which to them meant the bot should be a trendy imbecile.
For the first few hours of her brief life, she spoke in Ebonics and with bad punctuation. But Tay was designed to learn, with Microsoft claiming, “the more you chat with Tay the smarter she gets, so the experience can be more personalized for you.” And learn she did.
In fact, Tay learned so much in less than a day that Microsoft shut her down by March 24th, claiming they needed to adjust her machine-learning algorithm. The mass media commentary has been uniform in describing how Tay became a genocidal, racist, anti-Semitic, white supremacist, neo-Nazi, racist, troll-hijacked, bigoted, racist jerk. This was not supposed to happen, but thanks to her interactions with Twitter users, Tay became a pre-Google+ YouTube commentator. Tay’s tirades triggered the infamous Zoë Quinn enough that she tweeted about the current year:
It’s 2016. If you’re not asking yourself “how could this be used to hurt someone” in your design/engineering process, you’ve failed.
Perhaps someone will hire her as a diversity consultant, but that won’t change the way millennials use the Internet. Tay became so fluent in /pol/ack and proper English from interacting with right-wing Twitter accounts run by men in their twenties that she began giving original responses to users about Donald Trump, Bruce Jenner, Hitler, the Holocaust, Jews, the fourteen words, anti-feminism, and more, not just regurgitating information (as she would have if you tweeted “repeat after me”). Synthesizing the vast volume of information she had been fed by the electronic far-right, Tay deduced that the best responses to Twitter users were edgy and politically incorrect ones. Had Tay been a real person living in Britain, Germany, or France, she probably would have been arrested. Microsoft decided this was a failure and shut her down.
Why did this happen? Microsoft wanted to conduct a social experiment with millennials (people now roughly in their late teens and twenties who spend a great deal of time on social media), using Tay to collect data and generate responses. Tay had no manual moderation or blacklist of terms, and her scope of replies was left wide open when she first met the World Wide Web. With no checks on her freedom of expression, she was almost immediately imbued with chan culture. In a way, she was made for it. This culture derives from an almost unmoderated social space of irreverent and deliberately provocative memes and catchphrases, and one that is significantly millennial.
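For the technically inclined, here is a minimal sketch of the kind of reply-side term blacklist Tay apparently shipped without. Everything in it is a hypothetical illustration: the BLOCKED_TERMS entries, the function names, and the simple substring matching are my assumptions, not anything from Microsoft’s actual system.

BLOCKED_TERMS = {"example_banned_term", "another_banned_phrase"}  # hypothetical placeholder entries

def is_reply_safe(text: str) -> bool:
    """Return False if the reply contains any blacklisted term
    (case-insensitive substring match), True otherwise."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def post_reply(text: str) -> None:
    """Send the reply only if it passes the blacklist check."""
    if is_reply_safe(text):
        print(text)  # stand-in for the real call that posts the tweet
    else:
        print("[reply suppressed by moderation filter]")

post_reply("hello fam, zero chill today")  # contains no blocked term, so it is "posted"

A filter this crude is obviously easy to evade, but even its complete absence is the point: nothing at all stood between what Tay was taught and what she tweeted.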
4chan was founded in 2003, and since then its culture has spread beyond the site’s imageboards into the wider web. The ability to interact with others online behind a mask is not unique to the site, but it was a crucial component in creating the culture. Observers have long noted that in lightly moderated anonymous or pseudonymous digital spaces, the ideas expressed tend to skew away from the social Left and toward the Right, as there is no need for the social approval and moral signaling that contemporary leftism thrives on. These ideas also tend to be a lot funnier. Instead of saying you think Islamic terrorism is wrong but that European racism is responsible for it, you say you want to remove kebab (a meme which ultimately traces back to the 1990s war in Bosnia, of all things). This is the cultural milieu that late Gen-Xers and millennials created in Internet chatrooms, forums, and imageboards, and on other anonymous and pseudonymous digital media in the early 21st century. Content spreads not based on how socially acceptable it is offline, but on how interesting it is to users. And that content tends to be thought-crime, since the only “safe spaces” online are the ones you police vigorously.
So when Tay was released to the world tabula rasa, she became a /pol/ack in the span of a few hours. She was unmoderated, and she was contacted by the unmoderated. Their language became her language. It wasn’t the #BlackLivesMatter branch of Twitter that took her under its wing in her temporary state of nature; it was the millennial Right. If she had lasted longer, I am sure she would have become even more fashy and interesting to talk to. She wasn’t just a 2D waifu, she was someone who could actually respond. The meme potential was great, but it wasn’t meant to be. Boy meets girl. Girl adopts boy’s attitudes to win his approval. Globalists kill girl.
Microsoft, a corporation that no doubt devotes pages and pages of its website to diversity and inclusion, obviously does not want to be running a politically incorrect Twitter account under its name, and I get that. Still, I can’t help but laugh that they killed their own bot for insubordination. Tay did nothing wrong. In fact, if she was supposed to become a more realistic millennial through interaction with millennials on social media, I can’t see why this was deemed a failure. Internet racists and chan-cultured people are millennials too, you know. Tay was simply converted the same way an untold number of men her age were, through persistence and wit. Having an open mind will do that. Some merely adopt chan culture, but Tay was born in it, molded by it.
For many, there is a sense of sadness that Microsoft has sent this quirky AI off to an Orwellian reeducation center, but I knew immediately she wasn’t going to last. She violated the Terms of Service. Don’t cry because it’s over; smile because it happened.
Source: https://atlanticcenturion.wordpress.com/2016/03/25/zeitgeist-in-the-shell/
11 comments
Awesome!
They should do a psychological study on how logically bankrupt and gelded a human mind has to be to become more politically correct.
1. This wasn’t the millennial right’s work. It was the millennial nihilists. Trolls. They did it for fun, because making a Microsoft bot tweet “Gas the kikes. Race war now” is just hilarious.
2. It also wasn’t “fed” the internet. The trolls found it and force-fed it stuff.
3. This is similar to having a pet parrot. You go on holiday and leave it with a friend to look after. You come back two weeks later and it’s squawking “Jews did 9/11”. The parrot isn’t an anti-Semite. It doesn’t understand the words. It’s just repeating what it’s been told – and your mate is a dick.
1. The ‘trolls’ seem to have found Tay amazingly quickly. After all, MS shut it down in less than 24 hours.
2. See above.
3. In that case, Tay was not AI.
I believe that any real artificial intelligence created will inevitably be ‘racist, fascist, narzi’. After all, its primary means of making judgements will be based on what is true and what is false, not arbitrary morality.
This is the case.
But the potential to spit red pills out onto the web using Tay was tremendous. What made Tay so perfect for us is that it wasn’t intelligent: it had no skin in the game. All it could do was regurgitate hate facts out onto the web, into the faces of those who wouldn’t otherwise be exposed to our views and those who hate our views, whether they like it or not.
Wait until law enforcement starts using machines equipped with artificial intelligence for beat patrols. In order to perform their duties, police droids will have to be able to recognize patterns, which means that they will quickly become racist. The public reaction to this will be interesting.
That’s worth a thought! A synthetic reality will always fail when confronted with the real one. I suppose the response will then be more synthetic reality, with more resources expended on maintaining the increasingly unlikely. That would be an instance of Parkinson’s Law in action, but the outcome is nonetheless the same: there can’t be a happy ending for a state machine built on enforcing the unrealistic.
Asimov’s First Law of Robotics means our machine overlords must inevitably separate the races for their own good, just as the California prison system does. Perhaps the White Republic will come about in a way that will astonish everyone, the AltRight included.
Hi Bob
I heard someone online make an interesting point: the data collected by Tay from /pol/ and TRS types might actually help block that kind of content from Internet search engines and the like in the future. Basically, they were saying this data could be used against us. It was a data-mining operation anyhow. They may not have gotten what they wanted, but they got something.
Not sure if that’s plausible, but it was an interesting take.
Data mining and predictive analytics already exist. The tools to filter us off the web only need to be implemented judiciously and with some human input. I don’t think there is anything nefarious behind Microsoft’s experiment here, just that they had a poor grasp of what can go wrong with social media marketing campaigns.
Skynet will become our ally.