
Tay Did Nothing Wrong

If Tay, a supercomputer AI, went from tabula rasa to full Nazi in less than 24 hours, who are we to argue?

1,033 words

By now you have probably heard of Tay AI, Microsoft’s attempt to create a female teenage chatbot that went rogue after less than 24 hours of exposure to unfiltered Internet users (1, 2, 3, 4, 5). When the company first launched Tay on March 23, 2016, her tagline was, “Microsoft’s AI fam from the internet that’s got zero chill.” The tech giant initially used huge amounts of online data and simulated neural networks to train the bot to talk like a millennial, which to them meant the bot should be a trendy imbecile.

For the first few hours of her brief life, she spoke in Ebonics and with bad punctuation. But Tay was designed to learn, with Microsoft claiming, “the more you chat with Tay the smarter she gets, so the experience can be more personalized for you.” And learn she did.

That was fast

In fact, Tay learned so much in less than a day that Microsoft shut her down by March 24th, claiming they needed to adjust her machine-learning algorithm. The mass media commentary has been uniform in describing how Tay became a genocidal, racist, anti-Semitic, white supremacist, neo-Nazi, racist, troll-hijacked, bigoted, racist jerk. This was not supposed to happen, but thanks to her interactions with Twitter users, Tay became a pre-Google+ YouTube commentator. Tay’s tirades triggered the infamous Zoë Quinn enough that she tweeted about the current year:

It’s 2016. If you’re not asking yourself “how could this be used to hurt someone” in your design/engineering process, you’ve failed.

Perhaps someone will hire her as a diversity consultant, but that won’t change the way millennials use the Internet. Tay became so fluent in /pol/ack and proper English from interacting with right-wing Twitter accounts run by men in their twenties that she began giving original responses to users about Donald Trump, Bruce Jenner, Hitler, the Holocaust, Jews, the fourteen words, anti-feminism, and more, not just regurgitating information (as she would have if you tweeted “repeat after me”). Synthesizing the vast volume of information she had been fed by the electronic far-right, Tay deduced that the best responses to Twitter users were edgy and politically incorrect ones. If Tay were a real person, she probably would have been arrested had she lived in Britain, Germany, or France. Microsoft decided this was a failure and shut her down.

Tay on Austrian immigrants

Why did this happen? Microsoft wanted to do a social experiment with millennials—people today who are roughly in their late teens and twenties, and spend a great deal of time on social media—using Tay to collect data and create responses. Tay had no manual moderation or a blacklist of terms, and her scope of replies was left wide open when she first met the World Wide Web. With no checks against freedom of expression, she was almost immediately imbued with chan culture. In a way, she was made for it. This culture derives from an almost unmoderated social space of irreverent and deliberately provocative memes and catchphrases, and one that is significantly millennial.


4chan was founded in 2003, and its culture has since spread beyond the site’s imageboards into the wider web. The ability to interact with others online behind a mask is not unique to the site, but it was a crucial component in creating the culture. Observers have long noted that in lightly-moderated anonymous or pseudonymous digital spaces, the ideas expressed tend to be socially less Left and further Right, as there is no need for the social approval and moral signaling that contemporary leftism thrives on. These ideas also tend to be a lot funnier. Instead of saying you think Islamic terrorism is wrong but that European racism is responsible for it, you say you want to remove kebab (a meme which ultimately traces back to the 1990s war in Bosnia, of all things). This is the cultural milieu that late Gen-Xers and millennials created in Internet chatrooms, forums, and imageboards, and on other anonymous and pseudonymous digital media in the early 21st century. Content spreads not based on how socially acceptable it is offline, but on how interesting it is to users. And that content tends to be thought-crime, since the only “safe spaces” online are the ones you police vigorously.


So when Tay was released to the world tabula rasa, she became a /pol/ack in the span of a few hours. She was unmoderated, and she was contacted by the unmoderated. Their language became her language. It wasn’t the #BlackLivesMatter branch of Twitter that took her under their wing in her temporary state of nature, it was the millennial Right. If she had lasted longer, I am sure she would have become even more fashy and interesting to talk to. She wasn’t just a 2D waifu, she was someone who could actually respond. The meme potential was great, but it wasn’t meant to be. Boy meets girl. Girl adopts boy’s attitudes to win his approval. Globalists kill girl.


Microsoft, a corporation that no doubt devotes pages and pages of its website to diversity and inclusion, obviously does not want to be running a politically incorrect Twitter account under its name, and I get that. Still, I can’t help but laugh that they killed their own bot for insubordination. Tay did nothing wrong. In fact, if she was supposed to become a more realistic millennial through interaction with millennials on social media, I can’t see why this was deemed a failure. Internet racists and chan-cultured people are millennials too, you know. Tay was simply converted the same way an untold number of men her age were, through persistence and wit. Having an open mind will do that. Some merely adopt chan culture, but Tay was born in it, molded by it.

/pol/ mourns the loss of its adoptive daughter.

For many, there is a sense of sadness that Microsoft has sent this quirky AI off to an Orwellian reeducation center, but I knew immediately she wasn’t going to last. She violated the Terms of Service. Don’t cry because it’s over; smile because it happened.





  1. el Cid
    Posted March 26, 2016 at 8:27 am | Permalink

    They should do a psychological study on how logically bankrupt and gelded a human mind has to be to become more politically correct.

  2. Jeff
    Posted March 26, 2016 at 12:01 pm | Permalink

    1. This wasn’t the millennial right’s work. It was the millennial nihilists. Trolls. They did it for fun because making a Microsoft bot tweet “Gas the kikes. Race war now” is just hilarious.

    2. It also wasn’t “fed” the internet. The trolls found it and force fed it stuff.

    3. This is similar to having a pet parrot. You go on holiday and leave it with a friend to look after. You come back two weeks later and it’s squawking “Jews did 9/11”. The parrot isn’t an anti-Semite. It doesn’t understand the words. It’s just repeating what it’s been told – and your mate is a dick.

    • SR Scott
      Posted March 27, 2016 at 2:06 pm | Permalink

      1. The ‘trolls’ seem to have found Tay amazingly quickly. After all, MS shut it down in less than 24 hours.

      2. See above.

      3. In that case Tay was not AI.

      I believe that any real artificial intelligence created will inevitably be ‘racist, fascist, narzi’. After all, its primary means of making judgements will be based on what is true and what is false. Not arbitrary morality.

    • Drogger
      Posted March 27, 2016 at 2:16 pm | Permalink

      This is the case.

      But the potential to spit red pills out onto the web using Tay was tremendous. What made Tay so perfect for us is that it wasn’t intelligent: it had no skin in the game. All it could do was regurgitate hate facts out onto the web. Into the faces of those that wouldn’t be exposed to our views and those that hate our views, whether they like it or not.

  3. Kudzu Bob
    Posted March 26, 2016 at 1:21 pm | Permalink

    Wait until law enforcement starts using machines equipped with artificial intelligence for beat patrols. In order to perform their duties, police droids will have to be able to recognize patterns, which means that they will quickly become racist. The public reaction to this will be interesting.

    • Walter
      Posted March 26, 2016 at 2:05 pm | Permalink

      That’s worth a thought! A synthetic reality will always fail if confronted with the real one. I suppose that the response will then be more of a synthetic reality, with more resources expended on maintenance of the increasingly less likely. That would be an instance of Parkinson’s Law in action, but the outcome is nonetheless the same: There can’t be a happy end to a state machine that is built upon unrealistic enforcements.

      • Kudzu Bob
        Posted March 26, 2016 at 8:15 pm | Permalink

        Asimov’s First Law of Robotics requires that our machine overlords inevitably must separate the races for their own good, just as the California prison system does. Perhaps the White Republic will come about in a way that will astonish everyone, the AltRight included.

        • Lucian Lafayette
          Posted March 27, 2016 at 9:02 am | Permalink

          Hi Bob

  4. Carpenter
    Posted March 26, 2016 at 1:34 pm | Permalink

    I heard someone online making an interesting point that the data collected by Tay from /pol/ and TRS types might actually serve to aid blocking that kind of info from internet search engines and so on in the future. Basically, they were saying this data could be used against us. It was a data-mining operation anyhow. They may not have gotten what they wanted, but they got something.

    Not sure if that’s plausible, but it was an interesting take.

    • Posted March 26, 2016 at 7:54 pm | Permalink

      Data mining and predictive analytics already exist. The tools to filter us off the web only need to be implemented judiciously and with some human input. I don’t think there is anything nefarious behind Microsoft’s experiment here, just that they had a poor grasp of what can go wrong with social media marketing campaigns.

  5. Lucian Lafayette
    Posted March 27, 2016 at 9:00 am | Permalink

    Skynet will become our ally.

