When an earthquake shook Los Angeles on March 17, the first news report wasn't written by a human; it came from a program named Quakebot. This automated program is just one of millions of small programs called bots that can do tasks on the Web faster than their human creators can. From crunching data and patrolling websites to making jokes and writing poetry, bots can be humorous just as easily as they can be malicious. While their presence on the Web has raised fresh legal, ethical and privacy questions, bots are also shedding light on how companies personalize our experience on the Web. We explore the fascinating world of bots.

Guests

  • Alexis Madrigal, Senior Editor, The Atlantic; Author, 'Powering the Dream: The History and Promise of Green Technology' (2011)
  • Darius Kazemi, Computer Programmer
  • Ryan Calo, Assistant Professor of Law, University of Washington

Transcript

  • 12:06:44

    MR. KOJO NNAMDIFrom WAMU 88.5 at American University in Washington, welcome to "The Kojo Nnamdi Show," connecting your neighborhood with the world on "Tech Tuesday." Less than three minutes after a 4.7 magnitude earthquake rattled Los Angeles last Monday, the first news report about the quake hit the web. It was a lightning fast turnaround that even the best journalist would envy, but the speedy report with the L.A. Times byline wasn't written by a reporter. It was generated by a program called Quakebot.

  • 12:07:28

    MR. KOJO NNAMDIQuakebot is one of millions of automated programs called bots that are populating the web and performing tasks faster than their human creators could have imagined. From making jokes and flirting on Twitter to inundating your computer with spam, bots can be programmed to be both humorous and nefarious -- human-like qualities that raise broad new legal questions. But bots are also unexpectedly shining a light on the inner workings of the web and the formulas that marketers use to personalize your time there.

  • 12:08:01

    MR. KOJO NNAMDISo, how are these little programs affecting our lives? And how soon will Siri and HAL be running our lives? Joining us to answer that question is Alexis Madrigal. He is Senior Editor of The Atlantic, and author of "Powering the Dream: The History and Promise of Green Technology." He joins us from the studios of UC Berkeley in Berkeley, California. And joining us from the studios of Living On Earth in Boston is Darius Kazemi. He is a computer programmer who's also worked as a video game designer.

  • 12:08:36

    MR. KOJO NNAMDIYou can join us by calling 800-433-8850. Have you ever wondered if the people you're interacting with online are real? Have you ever been the victim of an attack by a bot net? What happened? 800-433-8850. You can send us a tweet @kojoshow using the hashtag techtuesday. Email to kojo@wamu.org. Darius Kazemi, thank you for joining us.

  • 12:09:01

    MR. DARIUS KAZEMIIt's great to be here. Thank you for having me.

  • 12:09:03

    NNAMDIDarius, you've made national headlines for some of the bots that you've created online. They range from a bot that shops for you on Amazon to bots that make jokes on Twitter, to a bot that writes its own stories. But just to be clear, let's start with the basics. What are bots and how do they work?

  • 12:09:22

    KAZEMIWell, you had it right when you said that they're small software programs. These bots that I write typically only take me a few hours to put together and aren't very complicated. A bot is essentially something that operates autonomously and, from a software perspective, it writes something out to the internet, or reads things from the internet. For example, many search engines use something called a spider, which crawls the internet and reads what's out there and fills up a database on the other end, so that when you use a search engine, you can find what's out there.
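
For readers who want to see what such a "spider" looks like in practice, here is a minimal sketch in Python. It is an illustration only, not anything described on the show: the seed URL is a placeholder, and it assumes the third-party requests and beautifulsoup4 packages.

```python
# A minimal crawler ("spider") sketch: fetch pages, record what's there,
# and follow links -- the read-the-web half of what a bot does.
import requests
from bs4 import BeautifulSoup
from collections import deque

def crawl(seed_url, max_pages=10):
    seen, queue, index = set(), deque([seed_url]), {}
    while queue and len(index) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            page = requests.get(url, timeout=10)
        except requests.RequestException:
            continue  # skip pages that fail to load
        soup = BeautifulSoup(page.text, "html.parser")
        # "Fills up a database on the other end" -- here, just a dict of page titles.
        index[url] = soup.title.string if soup.title else ""
        for link in soup.find_all("a", href=True):
            if link["href"].startswith("http"):
                queue.append(link["href"])
    return index

if __name__ == "__main__":
    print(crawl("https://example.com"))  # placeholder seed URL
```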

  • 12:10:02

    NNAMDIAre the codes for these little programs open to the public, for the most part?

  • 12:10:07

    KAZEMIIt really depends. For the nefarious bots, spam bots and that sort of thing, no. Criminals like to hold that kind of stuff close to their chest, but there's a lot of code out there for bots. There are a lot of bots that are simply utilitarian tools. You might be in a chat room somewhere and have a bot that you can ask for a dictionary definition, and it will return you one.

  • 12:10:31

    NNAMDIDarius, in some ways, these bots don't sound much different from the BASIC programs we learned in the 1970s and the 1980s, where we used code with words like "go to" and "run" to tell a computer to perform simple functions. How are today's bots different?

  • 12:10:48

    KAZEMIWell, technically speaking, yes, they're really not all that different, but what today's bots do is they leverage the internet ecosystem as it exists right now. So, for example, the reason why I can write a bot in very few lines of code is because there's a whole set of technology out there -- HTTP, which lets you talk to websites and also talk to APIs on the web. So, for example, I often use the Wordnik API and I sort of talk to it as though I were a web browser. And it gives me data in return, perhaps synonyms for words or words that rhyme with another word. Or something like that.
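
A rough sketch of "talking to an API as though I were a web browser," in Python: send an HTTP request, get structured data back. The endpoint, parameters, and API key below are illustrative placeholders, not Wordnik's actual API.

```python
import requests

def related_words(word, relationship="synonym", api_key="YOUR_API_KEY"):
    # One HTTP GET against a dictionary-style API; the URL is a stand-in.
    url = f"https://api.example-dictionary.com/word/{word}/related"
    resp = requests.get(url, params={"type": relationship, "api_key": api_key},
                        timeout=10)
    resp.raise_for_status()
    return resp.json()  # e.g., a list of synonyms, or words that rhyme

# print(related_words("bright", relationship="rhyme"))
```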

  • 12:11:27

    KAZEMISo, a lot of that processing power is in the internet architecture itself and also living in services -- free services, usually -- that are provided by companies out there.

  • 12:11:39

    NNAMDIDarius Kazemi is a computer programmer who's also worked as a game designer. It's a "Tech Tuesday" conversation about bots on the web. We're inviting your calls at 800-433-8850. Also joining us is Alexis Madrigal, Senior Editor at The Atlantic and author of "Powering the Dream: The History and Promise of Green Technology." Last year, the security firm Incapsula estimated that about 61.5 percent of all traffic on the web is now generated by bots. And research by companies that monitor digital accounts has reported that about 35 percent of a person's Twitter followers are actual people.

  • 12:12:17

    NNAMDIAlexis, we opened this show by talking about the bot that issued the first report on the L.A. earthquake last week. You've used bots in your reporting, as well. How easy are these things to make?

  • 12:12:29

    MR. ALEXIS MADRIGALI actually think it's incredibly easy. I mean, unlike Darius, I'm not a programmer by trade. I'm, you know, just a journalist. And what I wanted to do was take all the Netflix sub-categories out there, scrape them all, and then do stuff with them. And so, you know, it turns out there's like 75,000 of them -- those really specific little Netflix sub-genres. And so, not knowing how to do it in five lines of code, like Darius, I actually downloaded some software that lots of bot makers use called UBot Studio.

  • 12:13:04

    MR. ALEXIS MADRIGALAnd it allowed me to just sort of, within about an hour, create a little bot that just pulled the thing that I wanted out of these web pages and then kept loading the next web page and the next web page and the next web page. And it took a long time. It was much slower than a professional programmer could do it, but it absolutely did the job, and I didn't have to do any programming, and it took me, all in, two hours of time from beginning to end. I think it's -- you know, there are even commercial services that are helping people to do this kind of thing.

  • 12:13:35

    MR. ALEXIS MADRIGALThere's one called, you know, If This Then That. And it basically just creates this tiny little bot that says, like, if it's gonna be more than 70 degrees outside, send me the weather in a particular city. And it's really, you know, I think what we see here is that there's all this information out there on the web, and people are pretty good at processing it. But we're not really good at doing it fast. We're not good at doing it repeatedly. We're not good at doing it in bulk. And what bots allow you to do is all of those things -- to use the web really fast, basically, using just little machines.

  • 12:14:11

    MR. ALEXIS MADRIGALSimple machines, you can almost think of them, that make your experience of the web richer or more fun or easier.
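
Here is a sketch of the kind of scraping loop Madrigal describes: pull one item out of a page, then load the next page, over and over. It is a guess at the shape of the task, not his actual UBot Studio script; the URL pattern and the CSS selector are hypothetical, and it assumes the requests and beautifulsoup4 packages.

```python
import time
import requests
from bs4 import BeautifulSoup

def scrape_genres(first_id, last_id):
    genres = []
    for genre_id in range(first_id, last_id + 1):
        url = f"https://example.com/genres/{genre_id}"   # hypothetical URL pattern
        resp = requests.get(url, timeout=10)
        if resp.status_code != 200:
            continue
        soup = BeautifulSoup(resp.text, "html.parser")
        heading = soup.select_one("h1.genre-title")      # hypothetical selector
        if heading:
            genres.append(heading.get_text(strip=True))
        time.sleep(1)  # go slowly and politely; a slow bot still beats doing it by hand
    return genres
```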

  • 12:14:19

    NNAMDIWell, I gotta tell you, the less than 40 degree bot has been pretty busy here in Washington, D.C. these past few weeks, but Alexis, as entertaining as bots can be, they can also be used for dishonest, even malicious activity. Can you walk us through how these things can get unethical and even illegal pretty quickly?

  • 12:14:38

    MADRIGALYeah. I mean, if you think about the features of a bot, right, that it can just do something over and over and over and over and over, and it doesn't care. That it can take information from one part of the web and put it on another part of the web, you can already see some of the problems. For example, let's say you wanted to create 100,000 new email accounts and use those email accounts to send spam to people. You could do that really easily. In fact, there are even little plug-ins for UBot Studio that let you generate names of people, like, automatically, like, from lists that exist.

  • 12:15:17

    MADRIGALSo, you don't even, you know, you're creating all these different personas. And I think, you know, on the even worse side of things, I wrote a story last year about a San Diego high schooler, who, one day, woke up to having tens of thousands of Twitter bots all following her. And they all had the same basic configuration, which is that they had a funny name, they were from a US city of any size. So, like, they're from Billings or whatever. And then they had a very close to pornographic photo. So, you had suddenly ten thousand bots with close to pornographic photos, all following a 16-year-old in San Diego.

  • 12:16:00

    MADRIGALAnd because the bot spread via the social networks that people had, these bots started following, like, basically an entire high school in San Diego. With like, you know, very provocative photos of, I'm sure, things that a lot of those kids have seen, but also not exactly what you would want to have happen. And all that they were trying to do, eventually, was build out these huge networks and then start tweeting links to an acting/casting site. And that casting site has come under a lot of scrutiny for possible fraudulent behavior.

  • 12:16:36

    MADRIGALSo, it was basically just a huge marketing scheme that went a little bit awry. And ended up, you know, infiltrating, at a very high level, this San Diego high school.

  • 12:16:47

    NNAMDIPorny spam bots descending on a high school teenager's Twitter account -- and Alexis says these are photos that these teens may have already seen in another capacity someplace. I doubt that, seriously. Alexis, what is Twitter doing to crack down on this kind of bot invasion of its service?

  • 12:17:05

    MADRIGALWell, you know, they try and do things all the time, right? I mean, it's a little bit like Google and people who are doing things to try and game the search engine. You know, Twitter is always looking for behavior that doesn't look human, so they're sort of scanning their network, looking, trying to find patterns and say, like, oh, OK, this particular set of accounts doesn't appear to be acting like humans. They appear to be acting like robots. And so there's a bunch of things that you could look at.

  • 12:17:32

    MADRIGALYou know, are they -- is a whole bunch of accounts created at the same time that all only follow, say, five people? Or that all follow the same types of people? Are a bunch of different bots only tweeting like something to the exact same site? So, you know, are they all sending out the exact same advertisement, essentially? And what Twitter does is, you know, and what happened, in this case, with the San Diego high schoolers, is about 12 hours after the porny spam bots first descended, Twitter started to go through and wipe out those accounts.

  • 12:18:03

    MADRIGALAnd they sort of -- they basically reconstructed the network of these spam bots and then sort of swatted them, kind of, one by one. And so, as time went on, you could see those accounts being suspended and then eventually deleted. I mean, the problem is that as people, you know, not Darius, but the evil version of Darius, you know, they get smarter and smarter, and the bots look more and more human, and it gets harder and harder for Twitter to detect them. And, you know, so far, what we really see are the people who are bad at making bots, right?

  • 12:18:34

    MADRIGALWe see the amateurs. And what we know is that people who are really good at making bots are much -- they're much less likely to show up to us. We're not really gonna see them as bots, because they're close enough to acting human.
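
A sketch of the sort of not-acting-human signals Madrigal lists: many brand-new accounts that follow people but have no followers, all tweeting links to the same site. The account fields and thresholds here are made up for illustration; Twitter's real detection is far more involved.

```python
from collections import Counter

def suspicion_score(account):
    score = 0
    if account["following_count"] > 0 and account["follower_count"] == 0:
        score += 1                      # follows many, followed by no one
    if account["account_age_days"] < 2:
        score += 1                      # brand-new account
    domains = Counter(account["tweeted_domains"])
    if domains and domains.most_common(1)[0][1] >= 0.9 * sum(domains.values()):
        score += 1                      # nearly every link goes to one site
    return score

def flag_probable_bots(accounts, threshold=2):
    return [a["handle"] for a in accounts if suspicion_score(a) >= threshold]
```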

  • 12:18:49

    NNAMDIDarius Kazemi has an evil twin. Who knew? Alexis, computer generated visitors to websites must wreak havoc with legitimate advertisers, because they rely on web analytics to decide where to place their ads. How are bots affecting them? Are advertisers able to sort through which sites have genuine eyeballs and which ones are populated by bots?

  • 12:19:09

    MADRIGALWell, it's definitely a new business to try and tell you how many of your visitors and how many people who saw your ads are human, which is -- that is something that would have made no sense to someone 10 years ago. It's -- it really depends. The really basic version of, essentially, fraudulent traffic creation is really easy to detect, and that's because all of the traffic would be coming from the same address on the internet. All of it would have, you know, there's this kind of handshake that websites do that basically says, hi, I'm, you know, an Internet Explorer 7 browser running on Windows, and the website just sort of can take that information in.

  • 12:19:53

    MADRIGALAnd so, fraudulent traffic, at the sort of basic end of the programming spectrum, would look sort of -- all the traffic would kind of look the same. It's -- everyone is coming from Internet Explorer. Everyone is coming from, like, the same IP address. The real question is who has the incentive to try and root out this kind of fraudulent traffic? Some publishers, particularly on the shady side of the spectrum, buy a lot of their traffic. And, essentially, it's an arbitrage opportunity. They say, I can buy the traffic for, you know, a cent per unique visitor, and I can sell that traffic for two cents per unique visitor. And as long as no one asks too many questions, they can make a cent per unique visitor, and it's a money printing machine, right?

  • 12:20:39

    NNAMDIYep.

  • 12:20:39

    MADRIGALAnd I think that the people who have the most incentive to fight this sort of thing are, of course, the premium publishers. That is to say, what we would have previously called old media, plus places like, you know, Vice and Buzzfeed, who are trying to sell ads for a lot of money. And I think that is -- that's the real -- I think they are detectable, and it's a matter of how closely people really want to look.
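
A sketch of the basic check Madrigal describes: crude fake traffic tends to look uniform -- one IP address, one user-agent string -- while real visitors are spread out. The log-record fields and the 80 percent cutoff are assumptions for illustration.

```python
from collections import Counter

def looks_fraudulent(visits, concentration=0.8):
    if not visits:
        return False
    ips = Counter(v["ip"] for v in visits)
    agents = Counter(v["user_agent"] for v in visits)
    total = len(visits)
    top_ip_share = ips.most_common(1)[0][1] / total
    top_agent_share = agents.most_common(1)[0][1] / total
    # If most "visitors" share one address or one browser signature,
    # the traffic probably isn't a crowd of real people.
    return top_ip_share > concentration or top_agent_share > concentration
```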

  • 12:21:08

    NNAMDIWe got an email from Will in Adelphi, who says, or asks, are there any bots out there capable of writing software, given desired functionality? Any bots capable of writing software for another new kind of bot? Darius Kazemi?

  • 12:21:22

    KAZEMICertainly there are. There's a whole field of programming called meta programming, which is writing programs that write programs. And whether or not you would consider these things bots, I think, is kind of a fuzzy matter. But there are certainly automated programs -- I mean, for example, the UBot Studio that Alexis mentioned, is an example of a program that assists in the creation of these bots. Now, whether or not there are bots out there that simply create more bots, I mean, that's actually what you see a lot of the time.

  • 12:21:56

    KAZEMILike the example that Alexis mentioned with the high school student who got inundated with these bots, those bots are essentially self-replicating. There's a master program that creates more and more of these bots over time to sort of create the illusion of a network of real people. One of the easiest ways to tell if a Twitter bot is a bot is if it has -- if it follows a lot of people and nobody follows it back. So, an easy thing to do is to just create a new account and create ten other new accounts and have them all follow each other. And that way, it looks like a group of friends has joined Twitter at the same time.

  • 12:22:34

    KAZEMIAnd so that's all an example of bots that spawn off other bots. Now, whether or not there's a program that is creative about that sort of thing -- I know that there are artificial intelligence laboratories that are working on problems in this domain. I haven't seen any yet, but also, I'd be really excited if there were some.
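
A minimal illustration of metaprogramming in the sense Kazemi mentions -- a program that writes another program. Everything here is a toy: the generator just emits a tiny, runnable "bot" script to a file.

```python
bot_template = '''\
import random

GREETINGS = {greetings!r}

def speak():
    return random.choice(GREETINGS)

if __name__ == "__main__":
    print(speak())
'''

def write_bot(filename, greetings):
    # Fill in the template and write out a new, self-contained Python program.
    with open(filename, "w") as f:
        f.write(bot_template.format(greetings=greetings))

# write_bot("tiny_bot.py", ["hello", "hi there", "beep boop"])
```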

  • 12:22:56

    NNAMDIThat's scary Terminator-like stuff. We're gonna take a short break. When we come back, we'll continue our conversation about bots on the web. But if you have questions or comments, you can call us at 800-433-8850. Have you ever interacted with a computer program that seemed human online? What was your response? Give us a call. 800-433-8850. Send email to kojo@wamu.org. Shoot us a tweet @kojoshow or go to our website, kojoshow.org. Ask a question or make a comment there. I'm Kojo Nnamdi.

  • 12:25:06

    NNAMDIIt's "Tech Tuesday." We're talking about bots on the web with Darius Kazemi. He is a Computer Programmer, who's also worked as a video game designer. Alexis Madrigal is Senior Editor at The Atlantic and author of "Powering the Dream: The History and Promise of Green Technology." And joining us now by phone, from Seattle, is Ryan Calo, Assistant Professor of Law at the University of Washington. He specializes in Cyber Law and Robotics. Ryan Calo, thank you for joining us.

  • 12:25:31

    MR. RYAN CALOThanks for having me.

  • 12:25:32

    NNAMDIRyan, when we start talking about programs that can impersonate human beings, sweep up information and even control your computer, all kinds of legal red flags start waving. How do you sort this kind of activity out legally?

  • 12:25:47

    CALOWell, what's so fascinating about this discussion is that you can have a relatively simple program that, if it acts upon the world, can raise really complicated and interesting legal questions. Just one example might be, you know, Darius has this, this intervention, this experiment. It's called the Amazon Random Shopper. Maybe he can tell us a little about it.

  • 12:26:05

    NNAMDII'd love if he could tell us more about it later. But go ahead.

  • 12:26:07

    CALOGood. So, the Amazon Random Shopper just buys Darius random things on the internet, but imagine that this bot buys something for Darius that's illegal. Illegal, maybe, in his state, right? So, he's in Massachusetts, so it buys him, like, you know, candy with more than one percent alcohol content or something illegal in Massachusetts. You know, how should the law think about whether Darius is culpable for that? And would it be possible for Darius to build a bot in such a way that it never acted in this unanticipated way -- one that would really be unlawful if done by a human?

  • 12:26:40

    NNAMDIAs I said, we'll talk more about that -- about Darius's Amazon Random Shopper bot later, but can deploying bots on the web result in some unintended consequences? Let's talk about that danger and some examples we've seen, Ryan.

  • 12:26:55

    CALOSure. Sure. Well, I mean, so let's sort of stick for a moment to these bots that are on Twitter that we've been talking about so much. So, Stephen Colbert has this hilarious bot that's called Real Human Praise, @realhumanpraise. And what it does is it combines, you know, Fox News shows and anchors, basically, with movie reviews, I think, from Rotten Tomatoes. Right? And so, if you're in on the joke, like, it's hilarious. You get it. You see what's going on. But if you're not, you're just not sure what to make of it, right?

  • 12:27:23

    CALOSo, occasionally, this bot will, like, refer to Sarah Palin as, quote, a party girl for the ages. Or it has claimed that she's been wandering the nighttime streets, trying to find her lover. And you wonder at what point, you know, maybe not this particular bot, but you wonder at what point these bots can actually generate speech that is defamatory. And if an ordinary audience were to read it, they would understand it to be defamatory, and it would have real harm for the subject. But that wasn't intended in the right ways, or in the ways that the law cares about, by the person who actually wrote the underlying bot.

  • 12:27:53

    CALOAnd so, it's those sorts of situations where something, you know, unanticipated, something we might call emergent, occurs. That's where the law starts to have some problems.

  • 12:28:03

    NNAMDIOn to the telephones. Here is George in Washington, D.C. George, you're on the air. Go ahead, please.

  • 12:28:09

    GEORGEHi Kojo. And thank your staff for putting me through quickly. I don't have a lot of battery time. But here's the thing that I have. I've been putting -- I put together some (word?) websites, which are really cool, and they're mine, and I had a forum module attached to that. And a forum module allows people to come and join my forum. And it was my goal to have something that took off like Facebook or what have you. And what I'm finding is that these other computers are asking to have accounts on my website. And you know it's not a person because it's like Julie Nmzxy, you know, and the string's getting longer and longer.

  • 12:28:49

    GEORGEAnd I'm constantly bombarded by this, so it makes me wanna turn off the module. Eventually, it wears me down where I can't keep track of all these people applying for accounts, and they're not really people. They're -- I'm guessing they're bots. And I'm wondering, why does this happen, and what is the end goal of these people?

  • 12:29:06

    NNAMDIAny idea, Alexis Madrigal?

  • 12:29:09

    MADRIGALYeah. I mean, I think the reason it happens is the cost is so low. You know, it's easy to write this code, doesn't take a lot of computing power and there is -- you know, in some cases, the goals of these people are all different. They're not very good goals, though. You know, in the old days, it used to be that you wanted to have links on a bunch of different websites so that Google would see your website as being a higher ranking site. And I think that has been the primary goal of a lot of these people, and that's why, when you go look at spam comments on various sites or, you know, spam, you see these sort of like random strings of text and then like links.

  • 12:29:50

    MADRIGALIt's just, like, sort of both marketing spam and also a way of sending a signal to Google that whatever is being linked to is, in fact, higher quality.

  • 12:30:01

    NNAMDIGeorge, does that answer your question?

  • 12:30:02

    GEORGEBut how do you stop that? I mean, is there a way to stop that, because a guy like me, that has minimal resources, what do I do? Is there ways to combat something like that?

  • 12:30:12

    MADRIGALLet's ask Darius.

  • 12:30:13

    NNAMDIDarius Kazemi, any ways George can stop that?

  • 12:30:16

    KAZEMIYeah, it's sort of -- there's sort of an escalation happening, where the more you try to -- the more you figure out a way to combat the bots, the bot makers will just come up with a way around it. And so, it's really unfortunate that people have to put up with this kind of thing when there are people out there making malicious bots. Captcha is, you know, where you have to fill in, kind of, a bunch of letters or recognize something from a picture before you're allowed to post or create an account somewhere.

  • 12:30:45

    KAZEMIThat's kind of a classic way of putting a problem that is probably hard for a computer to solve in front of account creation, to prevent the computer from creating an account. There are, of course, bots that can get past Captcha. There are ways that you can, sort of -- you could use something along the lines of Mechanical Turk and essentially say, oh, here are a bunch of images. You know, fill these out as fast as you can, and then you can go back through and solve these Captchas for people, which, again, is bad news for you as the forum moderator. So, you know, it's hard.

  • 12:31:25

    KAZEMIIf Captcha is something that you can put in there and you haven't done, I would recommend that. Some people put a very simple question in like, you know, what's, you know, two plus 10 equal? Right? But they write it out or maybe they obfuscate the text somehow. Or whatever. Just a simple question to sort of let people know that there's a human on the other end of the line. If Captcha is something that you have installed and it's not working, then you have to start getting creative with that sort of thing, and I don't -- unfortunately, I don't know all the answers for this. So, best of luck with that.
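
A sketch of the written-out question Kazemi suggests: generate a small arithmetic problem, spell out one operand, and check the answer before letting the signup through. The details are illustrative, not a recipe from the show.

```python
import random

WORDS = {2: "two", 3: "three", 4: "four", 5: "five", 6: "six", 7: "seven"}

def make_challenge():
    a, b = random.choice(list(WORDS)), random.randint(5, 12)
    question = f"What does {WORDS[a]} plus {b} equal?"
    return question, a + b

def check_answer(expected, submitted):
    try:
        return int(submitted.strip()) == expected
    except ValueError:
        return False

# question, answer = make_challenge()
# allow_signup = check_answer(answer, form_input)   # form_input: whatever the visitor typed
```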

  • 12:32:06

    NNAMDIIndeed. Good luck to you, George, and thank you very much for calling. Ryan, is there, or should there be, legal immunity for bot makers if their bots were open source and made with no malicious intent?

  • 12:32:19

    CALOWell, the problem that the caller identified is an old one, and technically speaking, there is some legal recourse you might have. The problem, of course, is that it's very difficult to catch these people, and it's very difficult to figure out where they originate. But you're asking a different question about, you know, if I'm UBot Studio, right? And my bot ends up getting used for something quite bad. You know, maybe it crashes the stock market, as happened with the high-speed trading algorithms a couple of years ago.

  • 12:32:51

    CALOShould I have any liability? I've argued, in law review articles in the past, and most recently in the California Law Review, that you shouldn't actually be able to hold people who make platforms responsible for what users do with them, right? It shouldn't be your responsibility to make your tools in such a way that they can't be used for ill. Rather, the liability should fall on the individual user who uses your tool for the wrong thing. I think that's less of an issue when it comes to software, because the injury is not physical. I think it's much more important when you're talking about, like, an open robotic platform.

  • 12:33:27

    CALOLike a robot that you might buy, designed to do anything, that someone uses for the wrong reason, and then someone gets hurt as a consequence. You don't want it to be possible to sue the robot manufacturer. After all, with the internet, what would be Facebook's incentive to make an open platform for communication if they could be sued, literally, every time someone said something wrong on Facebook? And, in fact, they can't, because federal law immunizes Facebook for what users do, and I think that that same principle should apply when you're making tools like bots.

  • 12:34:01

    NNAMDI800-433-8850.

  • 12:34:02

    MADRIGALKojo, can I hop in there for one second?

  • 12:34:03

    NNAMDIOh please, go right ahead.

  • 12:34:05

    MADRIGALAs a UBot Studio user, one thing that I have been surprised by -- because I think, in theory, I completely agree with Ryan -- but one thing that I've been really surprised by is how UBot Studio builds in ways of getting around ethical norms. So, like, it, in fact, does have plug-ins for creating fake account user names, right? It has plug-ins for farming things out to solve Captchas, which creates, or at least adds to, the problem that we were discussing before. So, it's difficult because assigning -- I think you're right, Ryan, that, like, assigning culpability to the platform maker is very difficult.

  • 12:34:48

    MADRIGALBut when they're going out of their way in order to enable poor behavior on the part of users, it's really tough to say that none of the culpability -- maybe not the legal culpability, but none of the guilt -- should fall, in part, on UBot Studio, I think.

  • 12:35:08

    NNAMDIWhat do you say, Ryan?

  • 12:35:09

    CALOWell, I think Alexis, as always, raises a very interesting point. I think the problem is this, right? So, for every use case that is problematic, we can come up with one that is beneficial. I'm conducting a study, and I'm trying to test out my health database, and so I need to generate a bunch of random names for that purpose. If it can be shown that the only purpose of the particular module is to circumvent an existing restriction, or to do something unethical, well then, yeah, I can imagine -- and we see an analogue there with copyright.

  • 12:35:43

    CALOSo, copyright, in general, is not gonna hold a platform like Google responsible for the copyright violations that occur on its platform, absent, you know, notice and takedown. In other words, being, you know, being told that it has some kind of copyright violation on its site. That said, there have been situations where there's been secondary infringement, where a platform, like a peer-to-peer sharing service, was mostly used -- almost entirely used, let's say -- to violate copyright. And then, at that point, you started to see the prospect of secondary liability.

  • 12:36:17

    CALOSo, I don't disagree with Alexis. I just think that line is so difficult to draw between, you know, what is a use that could be beneficial on some theory, versus the only purpose of it is to do harm.

  • 12:36:29

    NNAMDIBack to the telephones. Here is Jared in Washington, D.C. Jared, you're on the air. Go ahead, please.

  • 12:36:35

    JAREDHi Kojo. Thanks for taking my call. I work in the government on crowdsourced prizes and challenges. And one of my questions is, as artificial intelligence progresses and bot coding becomes more sophisticated, what's the possibility of using bots to solve crowdsourced problems, like the satellite data that was opened up for crowdsourcing for the missing Malaysia flight? Or even polling for opinions on a certain topic. So, I'd be interested in hearing...

  • 12:37:05

    NNAMDIDarius Kazemi, what do you say?

  • 12:37:07

    KAZEMIWell, that's an interesting prospect. Bots are typically, you know, these small autonomous programs, and there are things called botnets out there that do actually, in a sense, crowdsource these problems. For example, a lot of malware, like worms that end up on your system, essentially what they do is they dedicate a portion of your processing power -- this is kind of why they slow down your machine -- and essentially crowdsource your computer and a bunch of other computers to sort of put a lot of processing power toward a particular problem.

  • 12:37:48

    KAZEMILike, for example, creating spam or that kind of thing. So, there are already botnets out there that do this sort of thing. In a legitimate sense, of course, there are plenty of distributed computing platforms out there, as well. I think that the key thing to understand here is that, typically speaking, these bots are not artificial intelligence, in the sense that you might expect from a movie like "Terminator," or what have you, where you have these individuals with -- who are nearly human and that sort of thing.

  • 12:38:26

    KAZEMIMaybe one day we'll get there, but these are very simple programs that essentially, for example, might just go and -- if you want to pretend to be a regular person, you'll just go and search Twitter and find an innocuous tweet, and then tweet that. There's not even any natural language processing happening. So, it's not crowdsourcing in the sense that we would think of it, where we're harnessing intelligence, but it's more like a distributed computing problem.

  • 12:38:55

    NNAMDIWell, Darius, you've got us -- oh, go right ahead, please, Alexis.

  • 12:39:00

    MADRIGALOh, I was just gonna say, yeah, there's -- sure. On the polling opinions, there are, in fact, telemarketing bots that try and do such things. I think the real problem -- I mean, they both get into legal trouble sometimes. And people, as it turns out, don't actually like interacting with robots, or at least the current generation of robots, most of the time, as anyone who's ever interacted with, like, one of those phone systems, you know, like, you're just, like, sitting there pressing zero to get to an operator. You know? It's a similar experience, but it's being pushed to you, so it's even worse.

  • 12:39:33

    NNAMDIDarius...

  • 12:39:35

    KAZEMIWhich is why I love -- oh, I'm sorry. I just wanted to say, I like making bots that are an opt-in experience, so that people can just choose to engage rather than it being forced on them.

  • 12:39:42

    NNAMDIWe're gonna talk about your bots in a second. We're gonna take a short break, and when we come back, we'll continue this "Tech Tuesday" conversation on bots on the web. Inviting your calls at 800-433-8850. Do you have a favorite bot online? What does it do? What kind of dangers do you think online bots or real robots pose for the law? 800-433-8850 or shoot us an email to kojo@wamu.org. I'm Kojo Nnamdi.

  • 12:41:57

    NNAMDIWelcome back. It's Tech Tuesday. We're discussing bots on the Web with Ryan Calo. He's a professor of law at the University of Washington. He specializes in cyber law and robotics. Alexis Madrigal is senior editor at The Atlantic and author of "Powering the Dream: The History and Promise of Green Technology."

  • 12:42:14

    NNAMDIAnd Darius Kazemi is a computer programmer who has also worked as a video game designer. Darius, you've got a slew of bots that you've let loose on the Web, including several on Twitter. They seem to be pretty harmless. Why do you make these things? And tell us a little bit about what they do.

  • 12:42:30

    KAZEMIWell, I make these things because they make me laugh, primarily. That's always the initial motivation for me. But, you know, I make bots because I think, for example, Twitter is just a very interesting platform for writing in general, but also for computer-generated writing, because it's a place that has a lot of interesting affordances. You're only supposed to generate these small text snippets, right, so that's an interesting limitation. It's got a lot of nice things built into it on Twitter.

  • 12:43:06

    KAZEMIFor example, you know, if people don't want to see one of my bots, they can just hit the block button, and they'll never see it again. I actually really like that sort of thing because it allows me to create things without worrying too much that I'm going to annoy other people. And very often, I use bots on Twitter to critique the way that people behave on Twitter.

  • 12:43:33

    KAZEMIFor example, I have a bot called the "Am I Right?" bot, and it tells silly "am I right?"-style jokes. For example, it usually just does a simple rhyme scheme. So, for example, Black Friday, more like Whack Friday, am I right? And...

  • 12:43:52

    NNAMDISpring Break. Spring Break, more like Spring Ache, am I right?

  • 12:43:56

    KAZEMIYeah, exactly. And so it's pulling things from Twitter trending topics, and it attempts to tell a joke about it. And usually these jokes fall flat, but, you know, part of what I'm trying to show with it is that these jokes often fall flat. And it's sort of a bankrupt form of comedy a lot of the time, and there's more to it than just following the formula. I have another one called Two Headlines, which takes two headlines on different topics and mixes them together. There was one recently: Microsoft about to discontinue Mila Kunis. How can you upgrade? You know, that sort of thing.

  • 12:44:31

    NNAMDIThere's another one. Google Glass led rally, falls short as KU loses to West Virginia, 92-86. But I...

  • 12:44:37

    KAZEMIYeah. There was one this morning about nuclear weapons versus the Chicago Bulls, you know.
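
A toy version of the Two Headlines idea: take two headlines on different topics and swap the subject of one into the other. Kazemi's actual bot finds the entities automatically; here they are supplied by hand to keep the sketch short, and the headlines are invented examples.

```python
import random

def two_headlines(a, b):
    """a and b are (headline, subject) pairs; splice b's subject into a's headline."""
    headline, subject = a
    _, other_subject = b
    return headline.replace(subject, other_subject)

pairs = [
    ("Microsoft About to Discontinue Windows XP", "Windows XP"),
    ("Mila Kunis Expecting First Child", "Mila Kunis"),
]
# print(two_headlines(*random.sample(pairs, 2)))
# e.g., "Microsoft About to Discontinue Mila Kunis"
```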

  • 12:44:43

    NNAMDII'd like to talk, however, about your Amazon random shopper bot that shed some light on the algorithms that Amazon uses to recommend stuff to me, to us, when we shop there. How did you come up with the idea of the random shopper? And how does it work?

  • 12:44:59

    KAZEMIWell, with random shopper, as usual, I was trying to amuse myself. I thought of -- I was thinking about the feeling that you get when you receive -- when you place an order on Amazon and it takes maybe six months to fulfill because it's on backorder or something, and then you just get that item in the mail. And it's like a gift that was given to you by your past self. And so I thought, oh, well, that's kind of cool just having these things show up out of nowhere.

  • 12:45:26

    KAZEMIAnd so my original concept for it was it was more like an app. And the idea was that there would be a randomness slider. And on one end, there would be no randomness, and it would just buy things off of your wish list for you. And on the other end, it would be complete randomness. And in the middle, it would sort of traverse Amazon's recommendation system and find things that you might like in the middle.

  • 12:45:50

    KAZEMIAnd -- but the first prototype that I built was the purely random one because I thought it would be more amusing to me than just looking at my wish list. And in the process of building that, I said, oh, well, this is really cool. I think this project is done, and I don't have to take it any farther than this. And so what it does is it pulls a random dictionary word out of a hat essentially. This is actually using a separate service from Amazon called Wordnik. And it grabs a random dictionary word.

  • 12:46:23

    KAZEMIAnd then it goes to Amazon. And it actually -- more so than most of my bots, this bot actually pretends to be a person. It loads up a Web browser, and it literally clicks on things in different places because Amazon doesn't allow you to purchase things automatically through them. They want you to go through their website, you know, as makes sense. So it's literally clicking on different elements and going through the buying process. But the first thing it does is it searches for this random word, and then it filters out everything except -- it only looks at DVDs, books, and CDs. And...
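
A sketch of the Random Shopper's first steps as Kazemi describes them: pick a random dictionary word, then drive a real browser to search for it, since the purchase has to go through the normal website. It assumes the Selenium package; the storefront URL, the search-box selector, and the word-list path are placeholders, and the real bot pulled its word from the Wordnik API rather than a local file.

```python
import random
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

def random_word(wordlist_path="/usr/share/dict/words"):
    with open(wordlist_path) as f:
        return random.choice(f.read().split())

def search_for_word(word):
    driver = webdriver.Firefox()                         # a real, visible browser
    driver.get("https://www.example-store.com")          # placeholder storefront
    box = driver.find_element(By.NAME, "q")              # placeholder selector
    box.send_keys(word + Keys.RETURN)
    # From here the real bot clicks through to DVDs, books, and CDs only,
    # picks a result within its budget, and walks the checkout pages.
    return driver

# search_for_word(random_word())
```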

  • 12:47:03

    NNAMDIWell, what did you receive from your Amazon random shopper bot? And how did Amazon try to pigeonhole you, to characterize you from your purchases?

  • 12:47:14

    KAZEMIWell, the first two things that I got from it were a Noam Chomsky book, which was kind of funny considering that he's done a lot of work in artificial intelligence and language, and the second thing that it sent me was a CD of avant garde electro-acoustic Hungarian music from the 1970s and '80s, which was really cool. And then it just started -- you know, as it sort of bought more stuff, Amazon started building up a profile. And it ended up thinking that the bot was a, I think, a Presbyterian sci-fi film lover who also reads philosophy on the side.

  • 12:47:57

    NNAMDIAnd that's not who you are?

  • 12:48:01

    KAZEMIIt's not who I am, but it's -- it was very interesting to see it build this profile out from scratch.

  • 12:48:07

    NNAMDIAlexis, you talked earlier about what a bot did with Netflix. And you used that in a recent story. It shed a lot of light on how Netflix comes up with the specialized categories of movies that it recommends to us when we use the service. Tell us about that project a little more and what you learned about Netflix using bots, as your reporting said.

  • 12:48:28

    MADRIGALYeah, sure. I mean, and I'll say, you know, just in general, before I talk about Netflix, what I love about Darius' toys is that they actually -- their silliness reveals -- it's, like, sort of the backside of the utilities that people are trying to build around these bots. Like, I think, in the future, it's not difficult to imagine that we'll all have a little swarm of bots that goes and does things for us.

  • 12:48:51

    MADRIGALIn fact, this is the vision for, like, Siri and Google with Google Now. But what's so great is that, while it's sometimes difficult to see that potential in the actual products of Google Now and Siri, you actually can see the potential when you hear Darius talk about it and you see how his toys work.

  • 12:49:07

    NNAMDIYep.

  • 12:49:07

    MADRIGALBut anyway, so when it comes to my Netflix project, basically what I wanted to do was figure out, like, what are Netflix's favorite words? You know, because it seems like Netflix is always recommending, you know, a romantic thing here, but it also has -- you know, it has, like, quirky. It has creepy. It has zany. It has all these different ways of talking about movies. And so I wanted to know, how does sort of Netflix, the computer system, talk about movies?

  • 12:49:37

    MADRIGALAnd so what I did was, as I mentioned earlier, I scraped all of the categories that Netflix had. And then I put them into another piece of software that sort of let me treat it just as a big corpus of text. And then I broke down the most popular phrases that Netflix uses. So, for example, we could say "from the 1980s" is actually Netflix's favorite decade. And then it uses "from the 1970s," then "from the 1990s." And we were able to sort of break up each of these individual genres into its constituent parts.

  • 12:50:13

    MADRIGALAnd so what you end up with is essentially a kind of -- all of Netflix's vocabulary. And when we had Netflix's vocabulary in hand and we had its sort of relative frequency of how often it used this or that and we had the kind of syntax that Netflix would use to describe a movie -- so that might be, you know, zany werewolf movies set in Europe during the 1980s. We could actually put together these different pieces into new genres.

  • 12:50:46

    MADRIGALAnd, of course, these new genres don't really define any particular movie, but they're pretty funny. And so what we did was we actually built our own little bot that sat on the Internet, and people could generate their own Netflix genres. And part of what we were trying to do with that -- we allowed people to have three different options. They could generate kind of a Hollywood-style, which would sort of be romantic thriller, or they could generate Netflix-style, which would be much more specific, so it'd be sort of a Canadian romantic thriller set in the 1990s -- maybe there is such a thing.

  • 12:51:23

    MADRIGALOr they could generate what we called sort of the Gonzo, which sort of used as much of Netflix's vocabulary as possible. And part of what we were trying to do there is just reveal the way that the choices that Netflix has made around what one of their genres looks like actually shapes the movie-watching experience. Like, they could do lots of different things, but they've chosen to do one particular thing. And that's how software ends up limiting the choices that we make and also helping us find the things that we like.
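
A sketch of the generator Madrigal describes: once you have Netflix's vocabulary and the syntax it uses, you can recombine the pieces into new genres in the three styles he mentions. The word lists below are tiny examples in the spirit of the conversation, not the actual scraped vocabulary.

```python
import random

ADJECTIVES = ["Zany", "Quirky", "Creepy", "Romantic", "Gritty"]
SUBJECTS = ["Werewolf Movies", "Thrillers", "Courtroom Dramas", "Road Trip Comedies"]
PLACES = ["set in Europe", "set in Canada", "set in the American South"]
DECADES = ["from the 1970s", "from the 1980s", "from the 1990s"]

def hollywood_style():
    # Broad, familiar categories: e.g., "Romantic Thrillers"
    return f"{random.choice(ADJECTIVES)} {random.choice(SUBJECTS)}"

def netflix_style():
    # Much more specific: e.g., "Zany Werewolf Movies set in Europe from the 1980s"
    return " ".join([random.choice(ADJECTIVES), random.choice(SUBJECTS),
                     random.choice(PLACES), random.choice(DECADES)])

def gonzo_style():
    # Use as much of the vocabulary as possible in one go.
    return " ".join([random.choice(ADJECTIVES), random.choice(ADJECTIVES),
                     random.choice(SUBJECTS), random.choice(PLACES),
                     random.choice(DECADES)])

# print(netflix_style())
```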

  • 12:51:53

    NNAMDIYou came up with nearly 80,000 ways that Netflix could describe movies. What can we learn about Netflix and how it's combining human intelligence and machine intelligence to get us to watch movies?

  • 12:52:06

    MADRIGALYeah, I mean, I think the dominant lesson that I took from it is that Netflix couldn't do the kind of analysis that it does using software alone. What they actually have people do is watch movies and television programs, and then they break down those programs into what they used to call quanta -- into their actual constituent parts. So they rate, for example, every single movie character based on their sort of ethical goodness.

  • 12:52:38

    MADRIGALThey rate every movie based on how romantic it is. They rate every movie based on how much sex there is in it. They rate every movie based on how much drug use there is in it. And they take all of those different categories and use them -- they use those tags, as they call them, to come up with the genres that you then see.

  • 12:52:57

    NNAMDIYou know, what can we take away from the work that Darius' bots do or the work your bots did on Netflix? Is there any way for regular people to prevent ourselves from being profiled -- labeled, if you will -- by these companies?

  • 12:53:16

    MADRIGALI think the -- you know, we're talking about Darius' bot, the random shopper bot.

  • 12:53:20

    NNAMDIYep.

  • 12:53:20

    MADRIGALI mean, introducing just a little bit of noise into the system throws off the profiling of most of these services. And I think, you know, right now, these problems are maybe less pressing because, in all honesty, like, most of the profiles that are developed around us are imperfect. And everyone understands that they're imperfect.

  • 12:53:39

    MADRIGALI think what scares me -- and this is maybe more something that Ryan can get into -- is, like, when these things -- when the profiles get better and when those profiles enter the real world through these robotics applications, it feels to me like the stakes get raised in those cases.

  • 12:53:56

    NNAMDIIndeed. Ryan Calo, I'm interested in the uncharted legal territory that comes with programs that act like humans. Can you talk about, oh, the privacy implications of having these programs running around the Web?

  • 12:54:08

    CALOSure. I mean, I will say that, you know, it's interesting to look to the European example, where their privacy laws essentially -- one of the interesting facets is that you actually have a right not to have a decision made about you by a bot, essentially. There is a part of the EU privacy directive -- the data protection directive there -- that says, you know, a company can't make a decision about you autonomously, or at least it has to give you the option to have a human being make that decision.

  • 12:54:36

    CALOBecause I actually disagree a little bit with Alexis here. I think that lots of decisions are getting made about us that are basically generated by algorithms, everything from, you know, whether someone should get paroled in some instance, whether you should be able to get on a plane. I think there are lots of senses in which, you know, our lives are managed in ways we don't even fully appreciate by algorithm, maybe not by the bots that Darius or Alexis make, of course.

  • 12:55:05

    CALOBut, you know, by sort of deeper application of algorithms, you know, in that regard, I would look to Danielle Keats Citron's work, "Technological Due Process." Privacy, though, you asked about. So one of the really interesting things about bots is that they can be very efficient at collecting people's information.

  • 12:55:25

    CALOAnd work by folks like B.J. Fogg at Stanford, you know, suggests that bots can leverage many of the same tricks as people, especially if they're anthropomorphic. So, for instance, you know, we tend to answer questions more often if it's reciprocal. And so a computer can say, I was made in 2007. When were you born? And we're more likely to answer that question.

  • 12:55:45

    NNAMDIMm hmm.

  • 12:55:45

    CALOSo they can leverage some of these same techniques, but also, of course, they're tireless, they have perfect memories, and they can change their identity at will. And so something that folks have been worried about -- people like Ian Kerr over in Ottawa -- you know, worried about the fact that bots can very efficiently gather tremendous amounts of information about us passively and actively.

  • 12:56:07

    CALOAnd that's exactly why we're seeing so much of the technology and why I expect to see so much more. The idea is that, you know, automation can do things at speed and scale. That's even true in the theater of war. I mean, Peter Singer was writing about this a couple of years ago, about why we should expect more automation in military contexts, because they're faster than people.

  • 12:56:25

    CALOAnd that gives an edge, and that's not something that our government is going to give up in that context. Similarly here, you know, these bots can give you an edge at one level. And I worry about when we start to perfect those systems. And I worry about what's in place to some extent today.

  • 12:56:40

    NNAMDIDarius, is adjusting bots to take nuances like fair use into account an easy thing to do?

  • 12:56:47

    KAZEMII wouldn't say it's easy, no. I actually try to do this myself. I have a whole kind of ethical code that I try to keep in mind whenever I make a bot. So, you know, I design a -- usually, I end up designing them such that they're not going to steal things from people, except -- you know, except where it may be fairly clear to me. I'm no legal expert.

  • 12:57:15

    KAZEMIBut when my bot grabs a five-word phrase from somebody's public tweet and uses it out of context, I feel like that's different than, say, grabbing an entire song. I have audio-visual bots that put music to things, and I make sure that they grab Creative Commons-licensed music. And if attribution is required, I provide attribution and that kind of thing. But that's often the hardest part of making these things.

  • 12:57:41

    NNAMDIOnly got about 30 seconds left. Darius Kazemi is a computer programmer who's also worked as a video game designer. Sorry to have to cut you off, Darius. But thank you so much for joining us.

  • 12:57:51

    KAZEMIThank you.

  • 12:57:52

    NNAMDIAlexis Madrigal is senior editor at The Atlantic and author of "Powering the Dream: The History and Promise of Green Technology." Alexis, thank you for joining us.

  • 12:58:00

    MADRIGALThank you very much.

  • 12:58:01

    NNAMDIAnd Ryan Calo is a professor of law at the University of Washington. He specializes in cyber law and robotics. Ryan Calo, thank you for joining us.

  • 12:58:08

    CALOThanks for having me.

  • 12:58:09

    NNAMDIAnd thank you all for listening. I'm Kojo Nnamdi.
