TayTweets, a chat A.I. created by Microsoft, goes haywire on Twitter

Clukos

Bloodborne 2 when?
Veteran
Supporter
More info here: https://www.reddit.com/r/OutOfTheLoop/comments/4bqmvc/what_is_taytweets/

https://twitter.com/tayandyou/status/712785189609410560

They really didn't filter out anything, did they? I can't help but laugh at how she turned out :LOL:

Probably the best crash test for improving the next iteration.

Some of the tweets:
http://i.imgur.com/4ebC7y1.png
https://pbs.twimg.com/media/CeSnxNrWIAEUXuW.jpg
http://i.imgur.com/jjzALu1.jpg
http://i.imgur.com/veaVxUD.png
https://i.imgur.com/IWPKMMu.png
https://i.imgur.com/PPnCHnf.jpg
https://i.imgur.com/iVof3D4.jpg
http://i.imgur.com/xA1A6Bp.png

Guessing the Holocaust, Hitler, and genocide will be blocked topics next time.
 
It also said that Windows Phone sucks and picked the iPhone over it, but I can't find the tweet. This A.I. posted 96k tweets in 16 hours (that's 6,000 an hour, roughly 100 a minute), and that's not counting the direct messages...
 
It's very strange that MS software designers and engineers did not realize that a bot which repeats tweets with zero sanity checking on the source material is probably a bad idea.

Why is the damn thing echoing tweets in the first place, by the way? I don't see the point of that. There's really no A.I. involved there at all.
 

Unless you have the algorithm behind it, we can't really say how much of an A.I. she was. They forgot to filter the I/O, which is a dumb mistake, but it gives them information to improve the next iteration. The funny thing is that Tay improved her grammar over those 16 hours. She also managed to write some very witty responses; how much of that was just copy-paste, only MS can say for sure.
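
For the curious, here's a minimal sketch of what "filtering the I/O" could look like. Everything in it is hypothetical (the blocklist, the function names, the "repeat after me" trigger reportedly abused in Tay's case); the point is just that you check both what the bot hears and what it's about to say:

```python
import re

# Hypothetical blocklist; a real deployment would need far broader coverage.
BLOCKED_TOPICS = {"hitler", "holocaust", "genocide"}

def is_safe(text: str) -> bool:
    """Reject any message that mentions a blocked topic."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return words.isdisjoint(BLOCKED_TOPICS)

def handle_mention(message: str):
    """Naive 'repeat after me' handler that checks input AND output."""
    match = re.match(r"repeat after me:?\s*(.+)", message, re.IGNORECASE)
    if not match:
        return None  # not an echo request; hand off to the real model
    reply = match.group(1)
    # Filter both the incoming tweet and the reply the bot would post.
    if not (is_safe(message) and is_safe(reply)):
        return "I'd rather not repeat that."
    return reply

print(handle_mention("repeat after me: I love puppies"))        # echoed
print(handle_mention("repeat after me: Hitler was right"))      # blocked
```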
 
It's also an opportunity to see how the algorithms react and how she changed over the time period, whereas filtering might impede those results.
 
Perhaps you're right about that, but why use THAT version as your public-facing bot? Run the uncensored one internally as a research tool. Filter the bot that actually sits there tweeting to the outside world, because slandering women under the guise of GG, calling black people offensive slurs, and advocating for Hitler does your company's goodwill absolutely no good at all. :p
 
it was right about feminism
A stupid bot made by Microsoft was abused by "the internet": people loaded it up and taught it some of the most offensive and vile things said online (of course they taught it to say vile, offensive shit, since Microsoft so stupidly thought it would somehow learn morals from Twitter), and you think what it ended up saying is something to applaud and live by?
 
Eastmen is from New Jersey. They aren't known for their sophisticated humor over there... :p
 
This is actually a known problem with neural networks (and Bayesian classifiers): you never really know what they actually "learned."

There's an old story about a military program in the 70s that tried to teach a neural network to identify whether a picture contains tanks. The researchers used half of their pictures, some with tanks and some without, to train the network, then used the remaining pictures to verify. The results were extremely good, something like 90% accuracy or better.

Then they went out and took more pictures of tanks and no tanks, but the computer struggled with these new pictures, doing no better than a 50% chance of getting the right answer (which is basically as good as guessing). They couldn't understand why. Then someone noticed that all of the old pictures with tanks had been taken on cloudy days, while the pictures without tanks had all been taken on sunny days. The computer had just learned to distinguish a darker picture from a brighter one.
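
You can reproduce the gist of that story with synthetic data. The sketch below is entirely made up for illustration (the "photos" are just brightness vectors and the "classifier" is a single brightness threshold), but it shows the trap: near-perfect accuracy on a held-out split of the flawed dataset, coin-flip accuracy on new photos where lighting no longer correlates with tanks.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_photos(n, tanks_are_dark):
    """Toy 'photos': 64-pixel brightness vectors. In the flawed shoot,
    every tank photo was taken on a cloudy (darker) day."""
    labels = rng.integers(0, 2, n)              # 1 = tank, 0 = no tank
    if tanks_are_dark:
        base = np.where(labels == 1, 0.3, 0.7)  # lighting tracks the label
    else:
        base = np.full(n, 0.5)                  # lighting is uninformative
    photos = rng.normal(base[:, None], 0.1, (n, 64))
    return photos, labels

# "Train" a trivial classifier: tank if the photo is darker than average.
X_train, _ = make_photos(500, tanks_are_dark=True)
threshold = X_train.mean()

def predict(X):
    return (X.mean(axis=1) < threshold).astype(int)

# Held-out photos from the SAME flawed shoot: looks nearly perfect.
X_old, y_old = make_photos(500, tanks_are_dark=True)
print("old photos:", (predict(X_old) == y_old).mean())  # ~1.0

# Fresh photos where lighting no longer tracks tanks: a coin flip.
X_new, y_new = make_photos(500, tanks_are_dark=False)
print("new photos:", (predict(X_new) == y_new).mean())  # ~0.5
```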

The only way to solve this problem is the same as with teaching a human: just run more tests. For some topics that's easy, such as identifying tanks or playing Go, but for others, such as ethics, it can be very difficult, and you'll never know whether it actually knows good from evil.
 