Season 4 Episode 5 Mar 10, 2021

Online Abuse and the Future of Anti-Harassment Tooling

Pitch

The tech industry could be doing a whole lot better when it comes to anti-harassment

Description

In this episode, we talk about online abuse and anti-harassment tools with Tracy Chou, CEO of Block Party, a company building tools to manage online safety and harassment, and Chloe Condon, senior cloud advocate at Microsoft.

Hosts

Ben Halpern

Forem - Co-founder

Ben Halpern is co-founder and webmaster of DEV/Forem.

Jess Lee

Forem - Co-founder

Jess Lee is co-founder of DEV.

Guests

Tracy Chou

Block Party - Founder and CEO

Tracy Chou is an entrepreneur and software engineer, known for her work advocating for diversity and inclusion in tech. She is currently the founder and CEO of Block Party, which builds consumer tools for online safety and anti-harassment.

Chloe Condon

Microsoft - Senior Cloud Advocate

Former musical theatre actress and Hackbright Academy graduate, Chloe is now a Senior Cloud Developer Advocate at Microsoft. Pre-Hackbright, she spent her nights and weekends performing in the Bay Area as a singer/actress and worked in tech by day in various non-engineering roles. Her article "How to Be a Woman at a Tech Conference" was one of the Top 20 Most Recommended Articles on Medium on 7/29/2017: http://bit.ly/2uUDeky

Show Notes

Audio file size

81520978 bytes

Duration

00:56:37

Transcript

[MUSIC BREAK]

 

[AD]

 

[00:00:01] BH: A common scene in technology companies everywhere, big conference table with the CTO on one end, developer teams on the other, the showdown. We have an idea, “Will it get funded?” More companies are feeling the pressure to go faster and stay ahead of the competition. Projects that have long timelines or no immediate impact are hard to justify. DataStax is sponsoring a contest with real projects, real money, and real CTOs. If you have a Kubernetes project that needs a database, the winner will get funded with a free year of DataStax Astra. Follow the link in the podcast description to submit your project. It’s time to impress the CTO and get your project funded.

 

[00:00:41] Eyes glaze over from debugging a remote Kubernetes service? Instead, run your service locally in your favorite debugger and instantly find the problem. Ambassador Telepresence is the easiest way to debug microservices on Kubernetes. Spend more time fixing problems instead of reproducing them. Ambassador Telepresence is free to use for teams with unlimited developers. Get started today at getambassador.io/devdiscuss.

 

[00:01:07] Educative.io is a hands-on learning platform for software developers. Learn anything from Rust to system design without the hassle of set up or videos. Text-based courses let you easily skim back and forth like a book while cloud-based developer environments let you get your hands dirty without fiddling with an IDE. Take your skills to the next level. Visit educative.io/devdiscuss today to get a free preview and 10% off on annual subscription.

 

[00:01:35] Get ready to level up at New Relic’s virtual event, FutureStack 2021, on May 25th through the 27th. Join your fellow data nerds from around the world to learn, inspire, and rack up experience in 50 interactive sessions, 12 hands-on labs, and a 24-hour hackathon. FutureStack is your cheat code for observability. Engineers from across the industry will lead you through topics like Kubernetes, DevOps strategies and observability. Then join us to relax with some Minecraft on Nerd Island. Registration is free at futurestack.com. Game on!

 

[AD ENDS]

 

[00:02:17] TC: One other thing that’s really problematic with the way most platforms treat abuse is that the burden of dealing with it is fully on the person receiving it. So Reddit’s response to me when I got harassed by thousands of trolls was like, “You can go report them.”

 

[00:02:41] BH: Welcome to DevDiscuss, the show where we cover the burning topics that impact all of our lives as developers. I’m Ben Halpern, a co-founder of Forem.

 

[00:02:49] JL: And I’m Jess Lee, also a co-founder of Forem. Today, we’re talking about anti-harassment tools with Tracy Chou, CEO of Block Party, and Chloe Condon, Senior Cloud Advocate at Microsoft. Thank you both so much for joining us.

 

[00:03:00] CC: Thanks for having me.

 

[00:03:01] TC: Yeah. We’re excited to be here.

 

[00:03:03] BH: You both have rich and interesting backgrounds in the industry. And knowing something about both of you, I feel like you have differing stories with lots to tell. So why don’t we start with Tracy? Can you give us a little bit about your background?

 

[00:03:15] TC: I have been a Silicon Valley person for a long time. So I feel very immersed in that sort of culture and also very regretful about it. I was a software engineer at Quora and at Pinterest, pretty early for both of them. I joined Quora as the second engineer on the team. At Pinterest when I joined, it was about 10 people. So I got to see, at a really early stage, a couple of companies that were building these social platforms, and I got to participate quite a bit in the design of them, which gave me an interesting lens into how a lack of diversity and inclusion and representation on these teams can lead to areas that get ignored, to the detriment of certain people and sometimes all of society. During my time at Pinterest and afterwards, I’ve done a lot of work around diversity and inclusion activism as well, helping to get tech companies to release their diversity data, which has mainly told us that tech companies are really bad at it and have not improved very much over the last however many years. I co-founded a non-profit called Project Include, which works with startups on diversity and inclusion. What’s been kind of interesting about all of this background is that part of my D&I activism work was coming from the experience of seeing how the lack of representation on teams led to problems like abuse and harassment. The more I’ve spoken about D&I and built a platform, the more harassment I’ve gotten, and that has now led me to working on a company to try to solve this problem.

 

[00:04:46] JL: Yeah. I can’t wait to learn more about Block Party. But before we move onto that, Chloe, can you give us a bit of your background story?

 

[00:04:52] CC: So I come to tech by way of the stage and screen. I have a non-traditional path here. I spent a majority of my childhood, teen years, and early-to-late 20s being very, very involved in the theater scene. I grew up in a family of artists, a costume designer and a director-playwright, went to a performing arts high school, and was very immersed in the arts my whole life. I ended up getting my bachelor’s in drama and theater performance from San Francisco State University. And inevitably, working as an actress in the Bay Area, I needed a 9-to-5, Monday-through-Friday job to help me get by financially while I was performing nights and weekends. So I kind of stumbled into Silicon Valley. I was working in every role except engineering, everything from recruiting to office manager to sales to customer support. These were the steady 9-to-5 jobs that were in the Bay Area at the time. I worked as a virtual assistant. I worked in retail. I did all kinds of stuff. And by chance, when I was working as an executive assistant, I ended up attending a talk at Google that was all about getting young women, specifically high school and middle school-aged girls, interested in computer science by adding characters to Disney Channel and Nickelodeon shows. And I was sitting there, a 26-year-old woman in the audience, furiously looking up what Girls Who Code was, realizing my ship had sailed because I was about 26 at this time. This actually was a huge light-bulb moment for me because I came home and lamented to my boyfriend, “I wish I could have had this when I was younger. I wish I would have learned how to program.” And it truly took someone saying, “You can still learn,” which of course led to me trying some classes online and then getting targeted Facebook ads for bootcamps, because those were very popular at the time.
And I ended up leaving my office manager job at Juul and Pax Labs, again, a very Silicon Valley company, and attending Hackbright in 2016, which is an all-female software engineering bootcamp, where I learned all the basics of how to learn a language and how to code. I literally did not know what STEM was the first day of my program. I had to Google it. And that has led me to a career in developer relations, which has become a really interesting blend of my skills as a performer and an actress with being able to educate and instruct. Specifically at Microsoft, I work on our academic teams, so I work with a lot of students. Truly, a lot of my job involves being a very visible woman online and in STEM. And I think it’s important to do that because the only female engineer I had to look up to when I was younger, that I can think of from pop culture, was Gadget from Chip 'n Dale: Rescue Rangers, and she is a chipmunk and a cartoon. So in everything that I do, I try very, very hard to be the representation that I did not see and often do not see in this industry.

 

[00:07:55] BH: And both of you within your roles as visible women in the software industry are absolutely no stranger to harassment. Will you two give us a sense of the scope and gravity of the harassment you’ve received, if you’re comfortable talking a little bit to that?

 

[00:08:15] TC: Basically, as soon as I started working at Quora, a question and answer site built off of community-generated content, you have people interacting with people online, and that gets bad pretty quickly. So actually the first thing I worked on at Quora was a block button, because somebody was bothering me. Even when we only had maybe a few thousand users on Quora, it was already starting to attract annoying people, not necessarily harassment, just people popping up and trying to be annoying. It has been a range, from the annoying kind of mansplaining, reply-guy type of stuff, people telling me I need to smile more and be less angry, to much more targeted harassment. The most severe from those earlier days was someone who was messaging me across probably at least five or six different platforms, sending sexually explicit threats, doing really weird things like downloading all my photos that were public and putting them into a public Facebook photo album, putting public posts about me on Facebook and paying to promote them, and making lots of Twitter accounts to harass me. There have been a few very dedicated harassers and stalkers over the years. Some of them are very proud to have been harassing me for five, six, seven years. They will post about it. Those are the more egregious cases where I’ve had to go to the police to file reports, because it’s so extended and has sometimes escalated to physical stalking and threats. Very frustrating, because law enforcement is extremely misogynistic and will not do anything until something happens, which is kind of too late. Last summer, I had a major incident with online harassment when I went on Reddit, which was very ill-advised, to do an AMA, to talk about working on Block Party and anti-harassment software, which then triggered a whole wave of harassment. Something like 4,000 trolls showed up, the little troll brigade. Reddit blamed me for it.
So the official Reddit Twitter account told me it was my fault, and that they did not condone harassment, and that if there was anything I thought was harassing, I could report it, which I was definitely not going to do after the trauma of seeing all these thousands of comments. This landed on 4chan, two different 4chan threads about the Reddit AMA, which then caused even more harassment for a couple more weeks, including a DDoS attack on Block Party and people creating accounts with my name and photo to post racist and misogynistic abuse, which is really disgusting, really, really awful to see your name and face next to really, really awful things. There are thousands of things in my email inbox like this. There were posts and comments on Substack. There was an overflow to Twitter. Apart from that, there’s lots of garden-variety sexism and annoying stuff on Twitter. I get a lot of the johnny-plus-ten-digits accounts with no profile photo posting really annoying things that seem to be coordinated harassment, particularly on more controversial issues like feminism, which shouldn’t be that controversial. Certain keywords will trigger a lot of harassment: China, Taiwan, politics. Even very factual things will trigger a lot of harassment. I get most of it on Twitter because I am most public on Twitter, but it does spill over to other platforms. Anywhere you are available online is surface area to be attacked. I’ll pause there. Chloe, over to you.

 

[00:11:34] CC: Very similar. The range and the variety of the harassment that I’ve received on so many platforms, it’s a diverse set, everything from LinkedIn to Reddit to anonymous feedback forms for conferences that I’ve spoken at. One that obviously comes to mind, that people read about a lot, involves someone named Tee Medlin, who literally copied a picture from an article that I had written and claimed on his Instagram that I had been following him around a whole conference and wouldn’t leave him alone. And that of course opened this incredible can of worms: his entire life online was fake, including an involvement with Seth Rogen, where he claimed that he was at a golf tournament with him, but the photo was actually of a wax statue. So that is probably one of the more public ones that I’ve dealt with. But behind the scenes, a lot of things come into my DMs, and that ranges from dick pics to sugar daddy inquiries to people correcting me on technical things that I’ve stated. Simply existing as a five-foot-two, white, blonde (now with bright orange Nickelodeon hair) software engineer, I upset people often by simply existing. I have read essays about myself on Reddit about how Chloe Condon specifically does not belong in tech. I have, during a keynote that I’ve given, had folks in the audience DM me asking me out for drinks, just really inappropriate stuff that’s even violated codes of conduct and hasn’t been taken care of at the events. Like Tracy mentioned, there are dedicated accounts. I often will block people, especially if they’re harassing me on Twitter, and they will just make a new account. And I do not know why that is still possible to do.
I have had folks do really interesting, tricky things, like posting a really garbage take about me that I call them out on, and it’s a white man, and they will then change their profile picture to an African-American woman to gain sympathy, just outrageous ways for these men to try to scare me or freak me out or push me out of the industry. I can’t agree more with what Tracy said about law enforcement and police, because having gone to law enforcement with the Tee Medlin situation, this police officer stared at me blankly at the Mission Police Station going, “I’m sorry. So you’re a Twitter person?” Unless something happens, they will not take action. And oftentimes, the harassers don’t live in the same state as you, so getting a restraining order isn’t even an option at that point. Something I personally do, because I come from the performing arts and comedy and I am a self-deprecating person, is be really public about this harassment. So I would often post screenshots of the stuff that I would see. I have tried, in creative ways, to make light of this very, very dark situation that haunts me constantly every day. It comes through every channel that I have open and available. But yeah, when you amass a large following on a bunch of platforms, the amount and the rate at which these come at you becomes so much greater. And we all experience imposter syndrome in this industry. We don’t need additional people telling us, “Hey, you don’t belong.” And it takes a toll. It’s a lot of small microaggressions that have added up to me many times taking a step back and trying to take stock of, “Is this worth it? Is this worth the amount of effort that I put into it?” And I think yes. And it’s important to be vocal and active against it. But it’s no easy task. I can’t even imagine; Tracy is combating it directly, and just dealing with it is hard enough.

 

[00:15:14] JL: You both have received several orders of magnitude more harassment than I do. But even just the other day, I got an email in my inbox that was extremely, extremely inappropriate. And I actually screenshotted it and sent it to Ben. I was like, “Can’t wait for an episode on Thursday.” It happens to so many people, and I just can’t imagine 10,000x of that, which is what you both experience. Even just that little incident a couple days ago threw me into a bit of a tailspin. I was frustrated. It raised my blood pressure. I was just annoyed. This person took up time and space in my emotional energy when they so did not deserve it.

 

[00:15:54] CC: Yeah.

 

[00:15:55] JL: Tracy, can you tell us about Block Party for our listeners who are not familiar?

 

[00:15:59] TC: Block Party, I started with the goal of just giving people more control over their experience online, and specifically, around the problem of harassment, being able to protect yourself from seeing that. The way it works is you sign up for Block Party and link your Twitter account. We want to go cross-platform, but we’re starting on Twitter. You set some filtering rules for what you want to see or don’t want to see. So you could say, “I want to be in a stricter mode. I need a break. I don’t want to see any @-mentions or replies unless certain criteria apply: somebody I follow, someone followed by someone I follow, so kind of within the extended network, verified users, someone I’ve engaged with recently. Everyone else will be automatically muted.” It makes the Twitter experience a lot quieter. Because we do it through muting via the Twitter API, you can just use the service as normal instead of having to go through a new interface. You just kind of do your thing. It’s just a better experience, and everything that’s been muted gets put into a folder on Block Party where you can still review it later. And there’s something really fundamental to this idea of being able to put it into another place so you can choose when to deal with it. A lot of people still want to see it. I personally still want to review it and see what’s there. The reason why we’re on platforms like Twitter is that we want to reach a bigger audience. We want to be able to connect with people who are not just people we already know. If I wanted to just talk to people I already knew, I have group chats for that. Twitter is a very different thing. It’s nice to have this platform, and I do want to hear from people.
The problem is when the okay stuff is mixed in with a bunch of abuse and harassment, and that feeling of your day being interrupted because somebody decided to say something nasty to you, and just because you happened to check Twitter, you’re going to see it and be thrown off. That really sucks. So by isolating all this stuff into a folder that you can check later, you can still get all the good things, maybe on some time delay, but it was never really that urgent to see it within seconds of something being posted anyway. You don’t have to feel like you might be missing out on something positive. But the other thing that’s really necessary about this is for the very negative cases. Sometimes you do need to be aware that there is a coordinated harassment attack happening against you, or in my case, I’ve had to deal with physical threats, and people have posted things where I needed to be aware of them. Prior to Block Party, I had those people muted and just didn’t know that people were making threats, which is very dangerous. So it’s good to still be able to keep an eye on it. But with some of the other principles that we built into this, we were very much thinking about centering the person who may be experiencing harassment and what would be helpful to them. One other thing that’s really problematic with the way most platforms treat abuse is that the burden of dealing with it is fully on the person receiving it. So Reddit’s response to me when I got harassed by thousands of trolls was like, “You can go report them.” I’m not going to go through 4,000 accounts to report them, but there’s no way for anybody else to help you. These reporting tools have been weaponized so many times, and any anti-abuse tool will end up being misappropriated for abuse. So that’s another thing that’s very frustrating to deal with.
Because these reporting tools have been weaponized, most of the time, unless you’re making a first-person report of harassment against you, the platforms deprioritize the reports. They don’t even look at them. So your friends cannot really help you. That’s super frustrating, that nobody else can help you when you are dealing with this deluge of harassment. With Block Party’s design, because we’ve put all this stuff that’s maybe suspect, maybe not so good, into another folder, you can delegate access to that folder to a helper, to a friend, somebody who you trust. You don’t have to give them your entire account credentials or hand over your phone for them to be able to help you. You can actually just delegate access to part of the issue. Some of the other stuff we’re thinking about is, “What do you do to follow up?” As I mentioned, I’ve had to file police reports before, trying to collect the evidence and document it. That’s super painful, and also re-traumatizing, to have to go screenshot everything. You have to consider things like, “Oh, if I try to get this account suspended, then I’ll lose all the evidence because the account will be gone.” So you’re thinking, “I need to go screenshot everything before I try to get an account reported and taken down.” That stuff is all really awful. So with some of what we’re doing, because of the design of filtering things into this folder, we just keep the logs there for you, and we make it easy for you to add new things into there if you need to collect them as evidence. We want to go cross-platform because we know that abuse is very cross-platform, and any attempts by companies to solve this, which are pretty meager right now, are always within their own platforms. There are very few platforms that have even considered repercussions for people based on what’s happened off of their platform. There are a couple that have kind of thought about it, but not really done very much.
But I really think that to truly solve this problem, you need an approach that looks at all aspects of someone’s experience.
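The allow-list filtering Tracy describes can be sketched in a few lines. This is a hypothetical illustration, not Block Party’s actual code or the real Twitter API: a mention is allowed through if its author meets any of the user’s trusted-network criteria (followed, followed-by-a-follow, verified, recently engaged with); everything else is muted and filed into a review folder, which also doubles as a log if evidence is ever needed.

```python
# Hypothetical sketch of allow-list filtering for mentions.
# All names (Account, FilterContext, process_mention) are invented
# for illustration; Block Party's real implementation is not public.

from dataclasses import dataclass, field

@dataclass
class Account:
    handle: str
    verified: bool = False

@dataclass
class FilterContext:
    following: set                 # handles the user follows
    followed_by_following: set     # handles followed by someone the user follows
    recently_engaged: set          # handles the user has interacted with recently
    review_folder: list = field(default_factory=list)  # muted items, kept for later review

def should_allow(author: Account, ctx: FilterContext) -> bool:
    """Allow the mention if the author falls inside the trusted network."""
    return (
        author.handle in ctx.following
        or author.handle in ctx.followed_by_following
        or author.verified
        or author.handle in ctx.recently_engaged
    )

def process_mention(author: Account, text: str, ctx: FilterContext) -> bool:
    """Return True if the mention is shown; otherwise mute it and
    log it in the review folder for the user (or a trusted helper)."""
    if should_allow(author, ctx):
        return True
    ctx.review_folder.append((author.handle, text))
    return False
```

In a real system, the “mute” branch would call the platform’s mute endpoint rather than merely filtering locally, so, as Tracy notes, the user keeps using Twitter’s normal interface while the muted content accumulates in a separate place to be reviewed (or delegated) on the user’s own schedule.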

 

[00:20:47] BH: I think blocking people for off-platform activity when you can verify it’s the same person is a no-brainer. That’s been our policy the whole time.

 

[00:20:55] JL: The folks who end up emailing me don’t know that I end up searching for their DEV profiles, blocking them, and suspending their entire accounts there, because that’s totally not tolerated. You can’t do that to me. Whenever people report things like, “This person is harassing me on Twitter and on DEV,” we will happily block them, but obviously it’s a very manual process for us right now.

 

[MUSIC BREAK]

 

[AD]

 

[00:21:34] BH: Sick of your laptop overheating every time you try to run your Kubernetes application locally? With Ambassador Telepresence, you can intercept your services on your local machine so that you can develop on your services as if your laptop was running in the cluster. Never worry about running out of memory again no matter how complex your Kubernetes application gets. Ambassador Telepresence is free to use for teams with unlimited developers. Get started today at getambassador.io/devdiscuss.

 

[00:22:04] New Relic’s Application Monitoring Platform gives you detailed performance metrics for every aspect of your software environment. Manage application performance in real time, troubleshoot problems in your stack, and move beyond traditional monitoring with New Relic One, your complete software observability solution. Get started for free at developer.newrelic.com.

 

[00:22:26] To connect with the team behind New Relic directly, join The Relicans. The Relicans is a new community hub designed to help developers create cool projects, inspire one another, level up and learn in public. You can start a discussion about your favorite programming language, ask a question about software observability, share tutorial, and lots more. Join today at therelicans.com.

 

[AD ENDS]

 

[00:22:54] BH: In the past few weeks, Steve Huffman, Reddit’s CEO, has been sort of on a victory tour, a lot of positive underdog framing in light of WallStreetBets and such. They did a Super Bowl ad, which they’ve never done before. So when you see Reddit on this little victory tour, coming off of your experience there, not only with the community but with the platform essentially backing up the community on this harassment, how do you take that?

 

[00:23:22] TC: How do I feel about this? Well, since it’s a podcast, you can’t see me rolling my eyes, but it is very angering. It reminds me of how broken the industry is: it’s the same sorts of people, mostly white and Asian men, building these platforms, not solving these problems, never prioritizing any of these issues, yet they get to keep working on them and getting lauded for what they’re building. Investors keep plowing money into these platforms, never questioning the potential negative consequences of platforming some of this stuff, and truly just ignoring the experiences of so many marginalized people and how much trauma we’ve had to suffer from them. It’s really enraging. I mean, Steve Huffman was going on Clubhouse to talk about how they have this kind of moderation tool around a metric of daily active shitty people, I don’t know, I forget what the term was. But just the way that he was describing it made it very clear that he does not have any thoughtful opinions about moderation or how to think about community. And he’s kind of crowing about it on Clubhouse, yet another platform that has had pretty bad issues around abuse, and they have pretty willfully ignored advice from people, including myself. I spent many hours giving them advice, which they have ignored. The people who have funded Clubhouse don’t care at all about this harassment. In fact, Marc Andreessen is very famous for blocking lots of women and other people, but especially a lot of women, and they really just don’t care about pouring gasoline on the tire fire. And to go back to Reddit, I’m pretty good friends with Ellen Pao, who was interim CEO at Reddit for a little while and tried her best to clean up the harassment. She tried so hard. It was a valiant effort, and she was essentially kicked out of the company for trying to clean up the harassment.
Alexis Ohanian threw her under the bus for trying to clean things up, and it’s just pretty rich when you think about how much awfulness Reddit has caused with the subreddit r/The_Donald; they would not de-platform that shit. To see them talking about how great things are is really, really, really irritating. I almost lost it when I had my Reddit AMA experience and had thousands of trolls descending upon me. And I have friends who worked at the company in pretty senior roles who refused to acknowledge what was happening. Despite mutual friends reaching out to them, being like, “Hey, are you seeing this?”, they just would not acknowledge any of this was happening. The continued gaslighting and blaming me for it, and then the performativeness of people like Alexis Ohanian, co-founder of Reddit, talking about, “Oh, this is so terrible that you’re dealing with this.” It’s like, “Well, you could do something. You could have done something. You still could do something. But you won’t. You’re still friends with the board members. You’re still friends with people who are in leadership, and you literally will not do anything except be performative.” That is the whole ecosystem that we live in.

 

[00:26:39] CC: I think about the term “facts not feelings”. You can feel that your platform helps protect women and marginalized folks, but that’s a feeling, not a fact. I think data is really, really important when we’re dealing with these things. Personally, I have a lot of friends on the inside at Reddit who work really, really hard on teams specifically meant to help with these things. But myself working at a large organization as well, it’s tough internally to get these things noticed and listened to and to be that voice, that squeaky wheel in the room. I think a lot about the quote from Hamilton, about being in the room where it happens, because we need these people in the room speaking up, being the noisy person saying, “Hey, this is bad. We need to change this.” The example I love to give is “fleets” on Twitter. If there had been one LGBT person in that room, it would not be called “fleets”, because “fleets” is a douching product term that is commonly used amongst the gay community. So we need those people in the room when these products are being designed, when these decisions are being made. If you’re looking at your harassment team or your customer safety team and there are no women or people of color on that team, take a step back and think about what’s happening there, because you need people in the room. And even talk to your users. Figure out how your platform is being misused. Making the internet a safe space for everyone is a hard job, but it must, must, must, must be done, because we are going to lose a lot of really, really interesting, amazing voices on these platforms if we don’t do something about it.

 

[00:28:16] JL: So Tracy, earlier you mentioned that you implemented the block button at Quora. What was it like to be an early voice in that room? Was that purely reactive, based on the harassment you were experiencing there? Or was it on the roadmap already, and having you there as an early voice made a difference for the company?

 

[00:28:38] TC: Yeah. We built a block button because I wanted to build it. It was not on our roadmap, but we were so small at that time, like five or six people, that there was no roadmap. It was just, build what you think is most important. And I really wanted to build this. So I built it. And that experience was actually a pretty big inflection point for me, understanding from personal experience how important it was to be a part of building these things. If I weren’t there, this would never have been prioritized so early. The only reason this got built is because somebody was bothering me. And I really wanted to make him the first person ever blocked on Quora. But this came up again and again in other aspects of company and product decision-making. I wanted to respond to one thing that Chloe mentioned earlier, which is, if you’re looking at your teams doing trust, safety, anti-spam, or whatever, and there are no women on them, you definitely have a problem. I would extend that: you need representation across all of your teams, because if the people working on your home feed, distribution algorithms, and recommendation algorithms aren’t thinking about what they’re giving prominence to on the platform, that’s a problem. You need people thinking about this in terms of the product design: How easy is it for people to respond and comment? Does the UI encourage people to have really thoughtless, quickly fired-off responses? Or is it something that nudges people in a different way? All of these product decisions, all throughout your product, matter. Do you want to build DMs that allow anybody to message anybody else? That is a pretty big product question, and that shouldn’t be something that’s siloed on a team called, like, the moderation team. This is something that the entire company needs to be thinking about.
And one of the things that really frustrates me about the way people talk about platforms and these problems of abuse or misinformation and all the other ways that things go wrong is that they think about it as a content moderation problem, as if it can be siloed into just catching bad content or maybe bad users, when really it’s something that goes throughout your entire product. The culture and DNA is pervasive through the entire product. You’re looking at not just this question of looking at individual pieces of content and saying, “Is this okay or not?” You’re looking at the community norms of how people are going to use every bit of surface area of your product to potentially harass others or spread misinformation. And I think this framing of it by tech companies is so often a big part of their inability to solve it, because they think it’s just going to be this question of labeling pieces of content as okay or not. In the best case, they have machine learning to classify it. And when the machine learning can’t do it, they farm it out to underpaid, traumatized content moderators who are forced into an impossible situation of reviewing as much terrible content as possible with an ever-changing set of rules and no context, because they don’t know where this is coming from. They’re set up to fail because this whole construct is broken and they’re getting completely traumatized. And the big tech companies are just like, “Oh, but we’re working on it. Our machine learning picks up 95% of this.” That’s not the point. Your whole product needs to be considered. I think the whole framing of it is wrong when we think about these platforms as really providing an online society, a new space for interaction. When we think about offline society, the offline world, we’re not just policing the quality or content of what people are saying. The totality of people’s experience in society is relevant, and we have governance structures that cover all of it, not just rules that say you can’t say hate speech.

 

[00:32:19] CC: Yeah. The whole thing about automation is so crazy too, right? Like to rely just on automation. Personally, I won’t out them on the show, but I was on a website that censored my name. My last name is Condon. It’s very similar to condom. My name was censored in every chat. But someone said, “Suck my dick.” And that was not censored, because the word suck was not censored and dick can be someone’s name. So people need to be in the room, making these decisions. And it’s just like Tracy said. The reviewers have no context, even when content is being reviewed by a human. We have a long, long way to go with AI and ML to determine what is good and what is bad. But we can’t do nothing either. It’s this weird middle ground where, as an engineer, I empathize only slightly with these engineering teams. I’m like, “Look, it’s hard to account for every single piece of harassment that comes my way.” I put on my engineer hat when I think of Twitter or even Reddit. And I think, “You know what? There’s a lot of ways that these tools would not catch what’s happening to me.” However, and I’m sure Tracy feels the same way, myself and many, many other people are so loud about how these things are affecting us and how these things are being abused, and we see nothing being done. We’re providing solutions. We’re giving examples and there’s no change. So yeah, it drives me crazy when we rely on these filtering tools to be a hundred percent of our moderation on websites, because it doesn’t catch everything and oftentimes it catches the wrong things.
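[Editor’s note: the failure mode Chloe describes is easy to reproduce. The sketch below uses a hypothetical ban list and fuzzy-match threshold, both invented for illustration and not taken from any real product, and it censors the harmless surname while letting the abusive phrase straight through.]

```python
from difflib import SequenceMatcher

# Hypothetical ban list and threshold -- illustrative assumptions only.
BANNED_WORDS = {"condom"}
FUZZY_THRESHOLD = 0.8  # how similar a word must be to count as banned

def is_censored(word: str) -> bool:
    """Fuzzy-match a word against the ban list, as an over-eager filter might."""
    w = word.lower()
    return any(
        SequenceMatcher(None, w, banned).ratio() >= FUZZY_THRESHOLD
        for banned in BANNED_WORDS
    )

def apply_filter(message: str) -> str:
    """Mask any word that fuzzily matches the ban list."""
    return " ".join(
        "*" * len(word) if is_censored(word) else word
        for word in message.split()
    )
```

[Here `apply_filter("Chloe Condon")` masks the name (a false positive), while the abusive message passes untouched (a false negative), the exact pair of mistakes described above.]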

 

[00:33:52] TC: It’s just really hard to get machine learning to capture this. The harassers, abusers will find ways around machine learning algorithms. It’s so hard to capture intent. There’s some people who try to do things with profanity. It’s like, “I don’t care about profanity. I care about abuse.” And I get stuff like, “Hey, don’t accidentally eat your dog.” That’s not going to be caught by any machine learning, but it is super racist and problematic, and I don’t want to see this and it troubles me. My profile photo on Twitter used to be an illustration of me with a corgi, and I’ve taken it down because seeing it reminds me of that really horrible comment. But that’s the kind of thing that your machine learning algorithm is not going to catch. The words are all fine. “Eat,” “dog,” totally okay things. But how do you capture that kind of intent? There was a harassment campaign a while back against journalists that was just “learn to code.” What machine learning algorithm was going to catch that this was going to be a new harassment vector? I mean, I agree, there’s definitely a role for automation that can help, but I think we’re just entirely framing the problem incorrectly. One of the things I really want to see is platforms implementing constructs around safety, like muting and blocking. I’m really glad that Twitter has these things, and many other companies also have block or some version of mute and different ways that people can protect themselves. But I think what’s also really important is that they continue developing more sophisticated tools. Just implementing block or mute is not enough. There’s many more things that need to be implemented. And I am somewhat biased in saying this, since I am building a company on top of their APIs, but I think they really need to open these APIs up so that other developers can build constructs on top of these. 
So the way Block Party works is we can help you automatically mute people, which just saves a lot of trauma. You don’t have to see all those things and then mute them after you’ve seen the bad stuff. So with these constructs in place around safety, and then exposing them to third-party developers, it makes it possible for other people to start building solutions as well. I’m very frustrated by some platforms like Reddit where they won’t build the tooling and they don’t make it possible for anybody else to build the tooling either. So I think there’s still quite a bit of responsibility there just to build basic moderation tools, which I know moderators have been asking Reddit to do, and they just won’t prioritize it in their product roadmaps. It’s not important to them. They can definitely build some of these tools, and they don’t need to be automations. It’s like, give moderators more power to do things, but also open it up to other people so they can build on top of it.

 

[00:36:28] CC: And there’s also things that I’m sure seem like nothing to the everyday person that can be really terrifying to other people. I talk to a lot of women about the fact that on Twitter, someone who has a private account can quote retweet you and you can’t see what they’ve quote retweeted. So a lot of times I’ll have women say, “Hey, is this person blocked?” Or, “What’s going on?” And I’m like, “Oh, no, no, no, that’s a private account.” And there’s all sorts of ways of harassment there. Someone could make a private Twitter account, like an “I Hate Twitter” account, and quote retweet every single one of my tweets with harassing, abusive stuff, and I would never be able to see it. So there’s a lot of really weird, just like we were saying before, sidestep-y ways of figuring out how to get around these tools. I think it’s important, much like how we do testing on our own software, to do testing of how people can misuse these tools and use them for evil. Unfortunately, and I have to remind myself this all the time, people who are mean on the internet don’t walk around like a villain with a vampire cape. They’re not super easy to spot. They’re all around us and they can do this in anonymous ways. And I think we really have to think worst-case scenario when we’re building these communication tools, especially if we are going to have people on them. I hope that your product has women and people of color on it. If not, maybe take a step back and think about that. But if you’re building good software, you’re building a product that all people can use, and you need all people in the room building that software. Hands down.

 

[00:38:00] BH: So at some point, for these things to happen, you need buy-in from people with money, from VCs, from whatever. And if the VCs are sitting there and they’re like me, people who don’t get harassed, and they’re thinking, “Oh, my experience is the internet,” they’re going to sit there and say, “Why do we need Block Party?” Did you run into pushback on the principles of the need for Block Party as you were getting going with it? And if you hadn’t already had a bit of a platform to build up a reputation around this, do you think it might’ve been different?

 

[00:38:34] TC: I absolutely ran into pushback, and I am so frustrated by the systemic bias that exists in our industry, where I was getting panels of white male investors telling me that abuse is a very niche problem that the platforms are already solving with machine learning. Right? I almost lost it in some of my panels. It’s so frustrating to be gaslit in this way, when I can speak to the experience of harassment firsthand. I’ve built tools for this. I have a master’s in artificial intelligence and machine learning. I have built moderation tools with machine learning. I know exactly what I’m doing. And even then I would get this kind of pushback. I did raise a small pre-seed round, and I’m very grateful to the investors who backed me. They’re largely people of color and women who understand this. At the same time, I was getting a lot of questions from people like, “Is this really a market? Is this really a problem? We don’t really see it.” And to be fair, as any startup founder who’s pitching, you have to be able to sell that vision and the market and convince investors that people will pay for this. I mean, from my perspective, it was pretty obvious that people would pay for something like safety. I think about how much money I spent in therapy. Like, yes, I will pay for something that gives me a little bit of my mental health back. I’ve talked to lawyers about filing restraining orders. Do you know how expensive that is? There’s a lot of people who are willing to pay for their own safety. But okay, fine, I need to convince these investors. I honestly just think that because I am a woman and a person of color, and it’s not as bad being Asian, but I think there’s still bias, it just made investors not want to back me. Because I’ve now seen white men who are non-technical, who do not experience harassment, have never worked on harassment, raise 10 to 15 times as much money as me. 
Clearly no questions asked about, “Is this a market?” It’s just, “Yes, we will throw money at you because you are working on an important problem.” And this drives me crazy. Some of these companies that are run by white men who don’t experience the problem are straight up copying Block Party, but they have 10 times the resources. And it’s so frustrating to watch them. One of them launched a pivot where they were previously B2B and they’ve now pivoted to copying us. It’s so clear even from their marketing videos or launch videos that they don’t understand this, because their marketing video is showing a bunch of abuse. I was like, “I don’t want to see this. This is traumatizing to see.” If I actually wanted a product to deal with harassment, I don’t need you to show me more harassment to prove that this problem exists to me. But they don’t get it. At the same time, they have more than 10 times as much money as I do to hire people to build their product. And so they’re just going to straight up rip off a woman of color who’s building this out of her own pain, because they’ve seen a market opportunity and white male VCs will continue to back them.

 

[00:41:32] CC: And I think, Ben, in what you said, you bring up such an excellent point that there are not enough women or people of color in leadership positions, like in VCs, on these teams making the decisions. And that’s why I get so excited on Twitter when I see black women saying, “I am starting a VC fund, I am doing all these things.” I’m like, “Please, please, please. We need this. We need the people with money who are making decisions to be the people that these products affect.” And it comes back to being in the room. And as women, we go into a room and we’re immediately not taken seriously or at face value. There’s a great podcast that I listened to, I Weigh with Jameela Jamil, featuring Reese Witherspoon, iconic Elle Woods, right? We think Elle Woods walks into a room and can get anything that she wants. But even in the film industry, with her trying to pitch Big Little Lies and all these things, it sucks. The fact that my friend who works in retail, I think they’re doing a Netflix series around this, had to make up a fake male co-founder to answer their emails to get taken more seriously? That speaks volumes. And the fact that I have male friends on Twitter who will change their avatar to a woman for a month and see the change in abuse happen instantly, overnight. I mean, this is why these people need to be making these decisions.

 

[00:42:52] TC: One of the things that really gets me also is that I am extremely privileged and I check basically all of the boxes that Silicon Valley is into, which took a lot of luck and also a lot of hard work to acquire all these credentials. I’m an engineer. Silicon Valley loves technical founders. I can build my own product. I have been building my own product as the tech lead. I’ve worked at Google, at Facebook, and at two really early stage startups that became unicorns, and one has IPO’d. I’ve done all these things. I have two engineering degrees from Stanford, where I graduated with honors in the top 5% of my class. I have all of these things that in theory Silicon Valley wants, and I’m building a product that solves my own problem, one that I’ve worked on in the past at previous companies. Basically every single thing you could ask for. And even then I run into this many obstacles. I just think, “How awful must it be for people who do not have all of these advantages I have?” I’m basically from central casting, what you would want in a founder, except for the fact that I am a woman. I cannot imagine what it’s like for people who don’t have all of this luck and privilege that I have. I’ve been able to raise money. I’m very grateful to even be able to take a stab at building this. There are so many people who never get this shot at all. But man, just to see how much more qualified I have to be to have this opportunity. When I compare my background to some of these other folks who are working on anti-harassment because it’s now hot and can raise 10 times as much money as me, it’s really, really grating. I have to have the perfect resume to barely be considered. There’s many other stories along the way of how awful it’s been. As a female founder, some of the things I’ve had to put up with in terms of hiring, like one person who accidentally emailed me his diary full of sexist thoughts about me. 
Can you imagine that people do this? There were just so many obstacles along the way. And it’s not even just that I’ve gotten to this point and I get to build. Every single thing that I’m trying to do is to try to make this problem go away, not just for myself, but for so many other people who should be able to participate online safely and have a voice, especially women, minorities, people from marginalized backgrounds, activists, journalists, politicians, the people whose voices we really need to hear to try to have a better tomorrow and who are getting silenced. I’m just trying to work on this problem, and it’s so difficult because everything structurally is against me, despite all of the advantages that I have. I just find it so incredible that this is the situation we’re in.

 

[MUSIC BREAK]

 

[AD]

 

[00:46:00] BH: Chances are, like other software developers, you learn better by doing than just watching. Unfortunately, most online learning platforms still have you passively sit through videos instead of actually getting your hands dirty. Educative.io is different. Their courses are interactive and hands-on with live coding environments inside your browser so you can practice as you go. They’re also text-based, meaning you can skim back and forth like a book to the parts you’re interested in. Step up your learning in 2021. Visit educative.io/devdiscuss today to get a free preview and 10% off of an annual subscription.

 

[00:46:36] A common scene in technology companies everywhere, big conference table with the CTO on one end, developer teams on the other, the showdown. We have an idea, “Will it get funded?” More companies are feeling the pressure to go faster and stay ahead of the competition. Projects that have long timelines or no immediate impact are hard to justify. DataStax is sponsoring a contest with real projects, real money, and real CTOs. If you have a Kubernetes project that needs a database, the winner will get funded with a free year of DataStax Astra. Follow the link in the podcast description to submit your project. It’s time to impress the CTO and get your project funded.

 

[AD ENDS]

 

[00:47:20] BH: So along the way, you’re building Block Party, how’s it going from an execution technical perspective? Have you found some components of this more challenging to implement and to get right?

 

[00:47:33] TC: I think the hardest part for us technically is actually just building on top of another platform, even though Twitter has been pretty friendly to us. And it seems like there has been kind of a sea change within Twitter in terms of wanting to support third parties’ work in anti-abuse. It’s still hard to build on top of an API, and we have to deal with rate limits and figure out traffic shaping. We’re synchronizing a lot of data from an external data source. And for our users who have pretty big accounts that get a lot of mentions, that’s basically already consumer web scale for a lot of the stuff that we have to process. At a lot of new startups, you get to work on a smaller system at first and scale up slowly. We already have to be processing Twitter-scale data for folks in order to make decisions like whether or not we’re going to mute somebody or take action. We have to synchronize their entire mute list, their block list, and a lot of data over. So there’s a lot of scaling challenges already, and then just working around the APIs and what they support. For example, things like synchronizing block lists are difficult, because a lot of people do use mass blocking tools, and people who experience a lot of harassment use mass blocking tools, which means that their block lists can be over a hundred thousand long. If you were to naively try to query someone’s block list, you will run into the rate limit, like the 15-minute rate-limit window. So you have to do a lot of massaging of the API queries and figure out, “How much data staleness can you tolerate?” All of that stuff is pretty tricky. And so I feel like we’ve had to spend a lot more effort on the systems architecture, scaling our interactions with Twitter, and managing much larger databases than I think one might naively expect of a new product that’s out there.
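[Editor’s note: the cursor-and-backoff pattern Tracy describes can be sketched roughly as below. The page shape and 15-minute window are modeled on Twitter’s cursored v1.1 endpoints such as blocks/ids, but the function names and the injectable `fetch_page`/`sleep` hooks are illustrative assumptions, not Block Party’s actual code.]

```python
import time
from typing import Callable, List, Tuple

class RateLimitError(Exception):
    """Raised by fetch_page when the API reports the rate limit is exhausted."""

RATE_LIMIT_WINDOW = 15 * 60  # seconds until a rate-limited endpoint resets

def sync_block_list(
    fetch_page: Callable[[int], Tuple[List[int], int]],
    sleep: Callable[[float], None] = time.sleep,
) -> List[int]:
    """Walk every page of a cursored list, sleeping out rate-limit windows.

    fetch_page(cursor) returns (ids, next_cursor); next_cursor == 0 means done.
    Tolerating staleness (waiting instead of failing) is the point: a block
    list of 100,000+ entries cannot be fetched inside one rate-limit window.
    """
    ids: List[int] = []
    cursor = -1  # Twitter's cursor convention for "first page"
    while cursor != 0:
        try:
            page, cursor = fetch_page(cursor)
        except RateLimitError:
            sleep(RATE_LIMIT_WINDOW)  # wait out the window, then retry
            continue
        ids.extend(page)
    return ids
```

[Injecting `sleep` keeps the retry logic testable without real fifteen-minute waits.]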

 

[00:49:20] BH: Do you have distinct principles about how to avoid false negatives versus false positives, and what the correct type of engineering mistake to make in that regard is as you’re building out? Because I’d imagine, if it goes too hard, people might get frustrated. And if it doesn’t go hard enough, people might question what’s going on. How’s that been working out?

 

[00:49:42] TC: Yeah. So I think a pretty key insight is that people want to be able to configure what that threshold is, and they’ll have different tolerances for false positives or false negatives. And because of the construction of the folder, where everything still ends up being visible to you, just at a later point, it’s actually okay to have mistakes, because the consequence of a mistake is not very great. You’ll still see it. For me personally, for example, I set my filtering pretty high because I don’t want to see potentially bad stuff and I’m okay with over-filtering. I’m okay with a lot of stuff that’s actually okay going into my folder and me seeing it a few days later. So we don’t have to get it right in the sense that a company would have to get it right if the action taken based on that label is, “We completely remove the content or censor it,” or, “We let it be.” Because we’re giving people the ability to choose and the consequences of getting it wrong are not very high, it’s actually totally fine that we have heuristics. They might look like kind of dumb heuristics. It’s like, “Oh, is this person a verified user? Do I let them through or not?” We can actually use really simple heuristics. We don’t have to invest in really fancy machine learning, which, as we’ve already talked about earlier, is not that good anyways. The heuristics we use actually work really well for a lot of folks, if you’re okay with some over-filtering or whatever it is.
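[Editor’s note: a toy version of that design might look like the following. The specific signals, weights, and folder routing are invented for illustration; they are not Block Party’s real heuristics, just a demonstration of why cheap rules plus a user-chosen threshold can be good enough when a mistake only delays visibility.]

```python
from dataclasses import dataclass

# Invented signals and weights -- an illustration, not Block Party's real rules.

@dataclass
class Author:
    verified: bool
    followed_by_user: bool
    follower_count: int

def mention_score(author: Author) -> int:
    """Crude trust score from cheap signals; higher means more trusted."""
    score = 0
    if author.verified:
        score += 2
    if author.followed_by_user:
        score += 2
    if author.follower_count >= 100:
        score += 1
    return score

def route(author: Author, threshold: int) -> str:
    """Send a mention to the main feed or a review folder.

    Mistakes are cheap: a "folder" decision only delays visibility,
    so each user can pick the threshold that matches their tolerance.
    """
    return "feed" if mention_score(author) >= threshold else "folder"
```

[Because a misrouted mention lands in a reviewable folder rather than being deleted, users can tune `threshold` aggressively without losing anything.]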

 

[00:51:02] BH: So you’ve built some of that tolerance for error into the overall design of the system, which sort of speaks to everything we’re talking about in terms of this being a problem you need to proactively design the system for, not throw over the wall to a machine learning algorithm to solve problems that are set up by the design of the system in the first place. Would that be an accurate thing to say?

 

[00:51:28] TC: Yeah. I think that’s a really good way of putting it and I think that intersection of engineering with product is one that companies don’t look at enough. Sometimes they’ll just frame it narrowly as an engineering problem and like, “This is a classification problem. We must get it right.” When really if you tweak the product parameters, you don’t have to get the engineering exactly right and it’s still fine for the users. But I think there’s not enough people thinking about things holistically and they so narrowly scope the problems that they end up solving the wrong ones.

 

[00:52:00] JL: What is the next platform you are looking to tackle?

 

[00:52:03] TC: We need to do more research on all the different platforms to make sure that this is the right one, but my strong intuition is that Instagram would be next, because the model is pretty similar in terms of comments on posts, and they have APIs for hiding and un-hiding comments that could work pretty similarly to muting and un-muting on Twitter. Facebook, for example, is a bit more complicated. You have all sorts of privacy settings. You have groups. It doesn’t extend as naturally. But when you have a very simple model that’s mostly public, with mostly influencers trying to build a platform and generating a lot of content, then it works pretty well.

 

[00:52:35] CC: I always think it’s so funny when I’ll talk to men or even just family members who say, “Well, why don’t you just make your account private?” I’m like, “Well, a big part of my job as a public figure, literally my job, is developer relations, and I use these products to help showcase the work that I’m doing. Making it private or invite only is not the most accessible way to get my content out there.” And I just love that, especially from folks who don’t experience this kind of harassment at the rate that we do, that the answer is always, “Well, there’s tools in place. You can go private. You can do an alt account.” No. People are constantly telling me, and I’ve said so many times on Twitter, “This is the week I’m going to close my DMs,” and I don’t want to do that. I want to be able to still communicate with my friends through DMs. I get a lot of inbound requests through my DMs. Why would I want to not use this piece of the product that everybody else gets to use? That’s what I think all the time. The tools and the things that we put in place to combat harassment shouldn’t mean taking things away from the folks who need to use them. It should be, “Hey, here’s a feature on top that can help filter or mitigate this for you or remove these harassers from the platform.” But I just always throw my head back and laugh at these suggestions of, “Oh, privatize everything.” Oh yeah. As someone who is looking to get my content out there, I’ll just make it private. No.

 

[00:53:59] TC: Yeah. I’ve had people tell me, like, “Why don’t you just get off of Twitter?” I’m like, “That’s not really an answer.” It’s the same when I complain about street harassment and people are like, “Just don’t go out on the street.” I’m like, “Well, I kind of need to get from place to place. So I would like to be able to go on the street, and I would like to be able to participate in our digital spaces and be online and access what everybody else can access.” So I don’t really feel like it’s a solution to just tell people to get offline or just ignore the trolls or whatever it is. It’s also a form of gaslighting, I think, when people are like, “Oh, it’s not that bad and you should be able to deal with it,” not really acknowledging the extent of the problem and how pernicious the effects are of silencing people, or the psychological toll that people have to deal with just to be able to exist online and participate and do the things that they’re trying to do.

 

[00:54:49] CC: That’s a great analogy, too, the street catcalling thing. I work with a bunch of co-workers at Microsoft. Microsoft is huge, right? I can tweet something “spicy” or funny or comedic and it will get a completely different response than if a male coworker had tweeted it. Right? And I think it’s the same kind of question that we would get on, “Oh, I was walking down the street and someone catcalled me.” “Well, what were you wearing?” is always kind of the go-to question for people. And I think it’s the same thing with Twitter. “Well, what did you tweet?” And it’s like, “Well, does it matter?” This shouldn’t be happening at all, and I shouldn’t have to censor myself in order to avoid harassment. I should be able to tweet the same things that my male coworkers are tweeting. But unfortunately, that’s not the case. So yeah. Great advice to any allies out there looking to offer support: please do not tell your non-binary and women friends to just privatize things, because I would like to experience the same version of the internet that you do at the end of the day.

 

[00:55:51] JL: Thank you both for sharing. I know it’s like a tough topic just to have to even think about.

 

[00:55:56] CC: Thank you.

 

[00:55:56] TC: Thanks for having us. It was so good to have this conversation.

 

[00:56:07] JL: I want to thank everyone who sent in responses. For all of you listening, please be on the lookout for our next question. We’d especially love it if you would dial into our Google Voice. The number is +1 (929) 500-1513, or you can email us a voice memo so we can hear your responses in your own beautiful voices. This show is produced and mixed by Levi Sharpe. Editorial oversight by Peter Frank and Saron Yitbarek. Our theme song is by Slow Biz. If you have any questions or comments, please email [email protected] and make sure to join our DevDiscuss Twitter chats on Tuesdays at 9:00 PM Eastern, or if you want to start your own discussion, write a post on DEV using the #discuss tag. Please rate and subscribe to this show on Apple Podcasts.