Is this the signal to move off of WhatsApp?
In this episode, we have updates about more Apple App Store drama, and Apple’s planned surveillance features to battle child sexual abuse material. Then we speak with Lydia X. Z. Brown, attorney, disability justice activist and policy counsel for the Center for Democracy and Technology, about a study that found that automated resume filter tools exclude millions from jobs, including those with disabilities. And then we speak with Craig Silverman, reporter at ProPublica, about an investigative piece he co-authored that details how Facebook undermines its privacy promises on WhatsApp.
Saron Yitbarek is the founder of Disco, host of the CodeNewbie podcast, and co-host of the base.cs podcast.
Josh Puetz is Principal Software Engineer at Forem.
Lydia X. Z. Brown is an advocate, organizer, strategist, educator, writer, and attorney working for disability justice and liberation. For over a decade, their work has focused on building solidarity-based communities and addressing the root causes of interpersonal and state violence targeting disabled people, especially those at the intersections of race, class, gender, sexuality, and violence.
Craig Silverman was the founding editor of BuzzFeed Canada and then became the media editor of BuzzFeed News, where he pioneered coverage of disinformation and online manipulation. He joined ProPublica as a reporter in May 2021 to continue his investigative work.
[00:00:10] SY: Welcome to DevNews, the news show for developers by developers, where we cover the latest in the world of tech. I’m Saron Yitbarek, Founder of Disco.
[00:00:19] JP: And I’m Josh Puetz, Principal Engineer at Forem.
[00:00:22] SY: This week, we have updates about more Apple App Store drama and Apple’s planned surveillance features to battle child sexual abuse material.
[00:00:30] JP: Then we speak with Lydia X. Z. Brown, Attorney, Disability Justice Activist, and Policy Counsel for the Center for Democracy & Technology, about a study that found that automated resume filter tools exclude millions from jobs, including those with disabilities.
[00:00:44] LB: We have to remember that machines never exist outside of social and cultural context. They are not neutral or objective and they never can be.
[00:00:52] SY: And then we speak with Craig Silverman, Reporter at ProPublica, about an investigative piece he co-authored that details how Facebook undermined its privacy promises on WhatsApp.
[00:01:03] CS: One of the things to me that really stood out about this whole large infrastructure is that WhatsApp and Facebook actually refuse to even call it content moderation.
[00:01:12] SY: So we’re starting this episode off with a couple of updates about Apple, which are connected to things we talked about last episode. So last episode, we talked about an Apple App Store settlement where the company will pay a hundred million dollars’ worth of payments to app makers and will also allow app developers to promote alternative payment methods that can circumvent Apple’s commission, but still not in the apps themselves. Well, Apple is now making another small, but slightly more substantive concession with app developers, at least in Japan. The Japan Fair Trade Commission had been investigating the App Store for being in violation of their Antimonopoly Act, but said, it’ll stop this investigation in exchange for Apple allowing “reader” apps to include an in-app link to their website for subscription signup, which could potentially let users circumvent Apple’s 30% commission. So if you’re wondering what categorizes a reader app, they are apps that let users go through previously bought content or content subscription. So think Netflix, Kindle, Spotify, and a bunch of news apps. So Josh, what are your thoughts on this?
[00:02:19] JP: Working on the Forem iOS app, I was very interested in this. We had to do something similar where we’re not allowed to let users sign up for accounts in our app. We have to redirect them to another website. And if anyone’s ever opened up the Netflix app on iOS, you see this really strange screen that refers to this, but can’t actually link you to the website and can’t actually explain what’s going on. It just says, “You can’t sign up for an account here. Sorry.” That’s literally what it says. So I was very interested in this ruling to see, well, maybe with the Forem app, we could be classified as a reader app and we could sign up for this. Well, digging into it a little bit more, you, as an app creator, do not get to apply to be a reader app. There’s no box you get to check. Apple is the one that gets to say you’re a reader app, and surprise, surprise, Apple basically leaves that designation for the largest apps, Kindle, Amazon, Spotify. So I don’t think that the rank and file app developers are going to benefit very much from this. This definitely seems like it’s something that Apple is going to apply to their largest partners and that’s about it.
[00:03:29] SY: So yes, you’re absolutely right. I remember signing up recently for Libro, which is an Audible alternative. They sell audiobooks, working directly with independent bookstores. And I got that message. It said, “You cannot…” It wasn’t make an account, because I think I was able to make an account. I think it was, “You cannot buy an audiobook.” And that was it. And I was like, “Well, what are we doing here?”
[00:03:51] JP: What now?
[00:03:52] SY: Yeah. I was so confused. I was so confused. And eventually, I figured out, I was like, “Oh, it’s that thing where I have to go to the website,” and blah, blah, blah. But it took a minute. I was like, “Wait, what’s the point of this app?” So it was really confusing. So I’ve definitely seen that, and it does cause a lot of confusion. And it is really helpful to just have just one more sentence that says, “Please go to the website.” It’s really all you have to say. So that would clear up a lot of confusion, but there are a couple of things that come to mind whenever we talk about these Apple App Store settlements and these issues. The first thing is, in this case, we’re talking about the Antimonopoly Act. And it makes me wonder, at what point does that kind of kick in? Because right now there’s really two options. Right? There’s Apple and there’s Android and that’s kind of it. And Apple does not make up a majority of the phone industry actually. I think it’s like 20-25% or something like that.
[00:04:46] JP: Yeah. It depends on which stat you’re looking at, but yeah.
[00:04:49] SY: Yeah. Yeah. But it’s less than 50%, definitely not a majority. And so it makes me wonder, is it that Apple, even at let’s just say 25%, is it that Apple has still too big of a share at that 25%? Or is it just that there’s only two competitors? If there were four competitors, do you think this would still be an issue?
[00:05:08] JP: And I also think there’s a third competitor in this case. You could say like the competitors would be Apple, Google or going directly through say Libro or Netflix or whatever.
[00:05:19] SY: That’s true.
[00:05:20] JP: And I think that’s really what the Antimonopoly part is getting at, that you are locked in to paying Apple for these services and for signing up and for transacting business. You don’t have the option to stay on the platform, but directly pay someone else. I think that’s kind of what they’re getting at. I also think it’s a case of there’s just so much money going through the Apple App Store. Study after study has shown that on average iOS users spend a lot more money with in-app purchases and on the App Store than Android users do and that could be from a variety of reasons, socioeconomic class of where the phones are being used, what country they’re popular in, et cetera, et cetera. But I think that is what’s really causing governments and agencies to continue to look at Apple and say, “You have so much money coming through you.” Yes, you’re only a small portion of the market, but if you take a look at the portion of digital purchases, you’ve got to imagine Apple is very near the top.
[00:06:26] SY: So do you think that it is…? Because that’s a good point, when we talk about dollar amount, revenue size compared to Android, it’s definitely a lot bigger. So maybe that’s kind of where the monopolistic feel kind of comes from, not necessarily how big is your user base, but how big is the market of digital spend when it comes to mobile devices and mobile apps.
[00:06:50] JP: Yeah. And I think there’s been a lot of talk about like the 30% commission. Is that fair? Is it 15? I think it’s just the idea that these companies that are producing content, whether they be Netflix, Kindle, Amazon, Spotify, huge companies, or even small companies, like the Libro app that you talked about, something independent trying to make a go of it. They’re all on a spectrum, but they’re all at the mercy of what Apple decides is the percentage they want to take, and that control, that decision is taken out of those individual businesses’ hands. I think that’s the real crux that a lot of governments are getting at like, “Hey, why? If Libro wants to be competitive and scrappy and sell their stuff at a loss or control how much of a percentage they take, they don’t get to do that because they have to go through Apple. They have to give up 30% to Apple.” And I think that’s really where the complaint is.
[00:07:47] SY: So tell me what you think about this. Because one thing that I struggle with is I think it’s just Apple’s market cap, frankly, that causes so much concern. Because I do think if they were a smaller company doing the same thing, I don’t think anyone would really care very much. But it makes me think of Walmart, for example, for a while one of the biggest retailers, and now they’re kind of dwarfed by Amazon, which is incredible to say. I can’t believe we all used to hate Walmart. But I think it’s a little bit different. Really, in retail, that 50% fee that folks buying or selling physical goods have to pay is bigger than Apple’s 30% fee. They’re giving half, literally half of their revenue is going over to these physical retailers. So do you think it just feels different because it’s digital, or is it because there are fewer competitors? What do you think kind of explains that difference in attitude between the physical goods and the digital goods?
[00:08:46] JP: So Apple likes to bring up that 50% retail fee a whole lot.
[00:08:49] SY: Yeah, exactly.
[00:08:49] JP: They love that.
[00:08:50] SY: They do. Yeah.
[00:08:51] JP: So here’s the difference. You could leave that store and go across the street to another store. Or you can go buy directly from another person. If you’re a supplier, you could choose not to sell to that store. You could go on the internet and sell directly to the consumer. Let’s talk about direct to consumer. There’s no direct to consumer with Apple. I can’t decide to take my bespoke digital goods and directly sell them to you. I have to go through the App Store, even if we know each other, even if we’ve already found each other and we want to have a business relationship. We can’t do it. We have to go through Apple. They’ve constructed digital certificates and authorizations and everything so that if I want my content on an Apple phone, I have to go through that App Store. It’s the only game in town. And whether you think that’s a monopoly or not, I think, depends on whether you view that there is a viable alternative to the iPhone for using a mobile platform.
[00:09:52] SY: Yeah. And that’s the big question is if there’s only two or three, like you said, you can go directly to the website and maybe use the web app. Well, either use the web version or just download and do that subscription online on a website, how big do you need to be to be a problem?
[00:10:10] JP: Yeah. I think you’re absolutely right. It’s because of their market cap and it’s because of their audience. I think it’s a combination of the audience that spends a lot of money. That’s what everybody wants access to. I mean, we talked about the Freedom Phone, for example, a couple of weeks ago. If the Freedom Phone came out and had a very similar setup where they had a locked-down app store, I don’t think anyone’s going to care, because they don’t have a ton of users, and they don’t have a ton of users that spend tons and tons of money the way Apple does. And that’s why everyone’s chomping at the bit to try to regulate them. That’s my guess.
[00:10:46] SY: Yeah, I get that. Yeah, I totally get that.
[00:10:49] JP: Well, our second update about Apple is about the company’s plan to introduce surveillance features in iOS 15, as well as on the iPad, Apple Watch and Mac, which would allow for automated scanning of iCloud photos and iMessages for potential child sexual abuse material. Well, following a slew of security and privacy concerns about the planned new features, including a scathing piece by NSA whistleblower, Edward Snowden, Apple has decided to delay its rollout. In a press release, the company stated, “Based on feedback from customers, advocacy groups, researchers, and others, we decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.” Do you buy it, Saron?
[00:11:33] SY: I don’t know. I watched it in preparation for our last conversation last week or our last interview. I watched the 45-minute technical explanation of the scanning. I read the tech articles. I saw the interview. I looked at all the material that Apple put out in response to the backlash that explained what they were doing. I genuinely think that they think this is a great idea. I really do believe it. You know what I mean?
[00:12:03] JP: Oh, they’re very proud.
[00:12:03] SY: They’re very proud of it. Because at first, I was like, “This feels sneaky. Is this just the excuse they’re using?” You know what I mean? I was very skeptical. I was very pessimistic in the beginning, and after watching it, I really do believe that they think this is a good idea. I genuinely think they are surprised at the backlash. I think that their intentions are well-placed. So I do believe that they are, I don’t know if reconsider is the right word, but I do believe they are taking a step back and at least reevaluating. I was talking to a couple of people and I had a couple friends say, “Oh, they’re just trying to make it look like they’re listening, but really they don’t care.” You know what I mean? They’re just kind of waiting it out.
[00:12:44] JP: Right.
[00:12:44] SY: They’re waiting out the storm, and then in a couple months, when the shouting has died down and people aren’t as upset about it, then they’ll bring it back with no changes. I wouldn’t be surprised if no changes happen, but I do believe that they are genuinely thinking about it and trying to figure out is there a different strategy, different safeguards they can put. I do think they’re trying to figure out how to make the public happy.
[00:13:10] JP: I agree with your friends that it does feel like they want to just wait this out. I really don’t think they’re going back to the drawing board and reworking the feature or trying to think of a way to like make it less on device. Something that I’ve read about was that this feature makes a ton of sense, if Apple gets into the business of completely encrypting iCloud information and completely encrypting iPhotos, right? If they were to roll out a feature and said, “Okay, this is it.” Like everything on iCloud is encrypted, governments, we can’t look at any of it. It’s all encrypted. We can’t see any of it. Then they can’t scan iCloud messages. They can’t scan mail messages and iPhoto for CSAM anymore and then this on-device scanning makes a ton more sense. I wonder if this feature is kind of like neck and neck with a plan or an eventual plan to encrypt iCloud data and it just was done too early and they’re just going to wait on it until iCloud encryption comes along or maybe they’re just wondering whether they should be iCloud encrypted all anymore. It feels like there was a roadmap somewhere and somebody got their piece done way in advance of another piece and the rollout just doesn’t make sense right now.
[00:14:25] SY: You know, I liked that explanation. I like the idea that they are going to do end-to-end encryption and that really this is part of that plan, and they probably didn’t think it was a big deal to kind of roll this out first. I don’t think they were expecting the backlash that they got. And I totally see that. I can see, if that is what they’re doing, if that’s kind of the end goal of all this, I can totally see them waiting or delaying the CSAM scanning until end-to-end encryption comes about, and then they can kind of release that as one feature. And I think that would be much more palatable to security researchers, to privacy experts, and to the public in general.
[00:15:03] JP: Yeah. At this point, they just have to wait, right? You could have a 45-minute interview explaining the feature. But for people that aren’t paying attention to the tech sphere, if Apple were to continue with this feature, they’re going to get iOS 15 and they’re just going to be like, “Oh, I heard they’re scanning my photos now.” The end users are not going to go watch a 45-minute interview with an Apple exec explaining why this is a good idea.
[00:15:27] SY: No. No. Absolutely not. Coming up next, we talk about another controversial automated system, this time in the form of resume filter tools, after this.
[00:15:57] SY: Joining us is Lydia X. Z. Brown, Attorney, Disability Justice Activist, and Policy Counsel for the Center for Democracy & Technology. Thank you so much for being here.
[00:16:08] LB: Thank you so much, Saron. I’m glad to join you.
[00:16:11] SY: So tell us about your career background. You have a very impressive title, a lot of things going on. Tell us a little bit about yourself.
[00:16:18] LB: For more than 10 years, I’ve been an organizer, activist and advocate in spaces for disability justice and disability rights. My work has largely focused on issues of interpersonal and state violence that targets disabled people at the margins of the margins, particularly along intersections of disability with race, gender, sexuality, faith, language, and nation. I’ve done that work as a policy advocate, as adjunct faculty at two universities currently, and through a number of other arenas of community organizing and building that don’t have formal titles and are not necessarily attached to a formal organization. In my current work as a policy expert and advocate, I focus on the ways in which algorithmic injustice, bias, and discrimination harm and marginalize disabled people, and further the disparities and disproportionate forms of violence, discrimination, and prejudice that impact them. And one of those areas I know that we’re really excited to talk about today is the ways in which AI has discriminated against disabled people in hiring and employment processes. Our work also spans issues and questions of AI discrimination in public benefits as well as in surveillance and incarceration.
[00:17:35] JP: So there was recently a report by Accenture and the Harvard Business School that found that automated resume filter tools are excluding millions from jobs. You co-led a project that looked at algorithmic fairness and bias in hiring contexts. Can you talk about what some of your findings were?
[00:17:52] LB: In our work, we found that disabled people, for a variety of reasons, find that AI hiring tools can sometimes be outright inaccessible, and at other times discriminatory because of their design, because of their purpose, and because of what it is that they’re actually evaluating about an applicant. And unfortunately, a lot of the time disabled job seekers don’t necessarily even know that they’re being evaluated by an AI, or if they are aware that there is an AI-based tool in their application process, what that AI tool is actually measuring. For example, sometimes applicants will be asked to take a gamified assessment where they have to click certain dots at a rapid speed or choose between different images to represent different kinds of abstract ideas. And a person might not know that in the first type of assessment their reaction time is being measured, their risk-taking behavior is being measured, and where their mouse is on the screen and what types of movements it’s making are being calculated and analyzed. Or that in the second type of assessment, the assessment is actually attempting to measure characteristics of their personality, whether they are an optimistic person or not, what they think about themselves or how they think about themselves in relation to other people. And a person may have absolutely no idea that their disability, one that inhibits reaction time, that creates a processing delay, that has a visual processing component to it, that involves their attention and their focus, is going to impact their performance in the first assessment. Or, in the second example, that a person’s experience with bipolar, their experience with past psychosis, or their experience of depression or suicidality might also impact what their responses will be and therefore how they might be scored. And people may not even know, even if they’re aware that they’re being assessed, right? Where that assessment goes? Who’s looking at it?
What decisions that person is making, or whether there is a person involved at all, or whether it is simply a computerized assessment that is making that decision to begin with? And so what we know is that whenever you attempt to assess candidates based on a norm, which is what AI functionally has to do, it has to create a norm and measure people against whatever that norm is, whether for speed, reaction time or level of perceived optimism, a disabled person by definition is going to challenge that norm, because the ways that our bodyminds exist do not conform to expectations of what a healthy, well, or functional person is supposed to be like, regardless of our ability to actually perform a job. What we have found, unfortunately, also and not surprisingly, is that similar to the ways in which people experience employment discrimination for other reasons of marginalization, because of race, because of being trans, because of being an immigrant, it can be incredibly difficult to prove that discrimination has occurred. And when we’re talking about the world of algorithms, another barrier to that is that many algorithms that are used in AI-based tools for hiring exist in a black box. We don’t know what the code is. We don’t know how it works. And developers and users of those tools might claim that that code is protected. It’s intellectual property. It’s a trade secret, and they don’t want to disclose what that code is, and they especially don’t want job seekers to know what it is, because they don’t want a job seeker to game the system. But in reality, they’ve already rigged the system.
[00:21:19] SY: So how big is the neurodivergent population or people with disabilities population working in tech? Do we have a sense of how many people are being affected?
[00:21:30] LB: You know, I don’t know what the numbers are in the technology field specifically, but we do know that even according to US Census data, which itself is questionable and problematic, the US Census estimates that disabled people across all types of disabilities, including neurodivergence, make up at least about 20% of the US population. The actual number is probably higher because that census data does not account for many types of disabling experiences, especially disability experiences that might be tied to and connected to experiences with racism or with capitalist exploitation or with surviving trauma. If we account for the likelihood that that number is on the lower side, then we know the actual number is going to be higher. We don’t know what it is, but it’s higher than 20%. Now the technology field as a whole might not necessarily be representative of the general population, but even if it hews a little bit close to what the overall population is, we can take that number as basically a placeholder: there are probably at least around 20% of people with disabilities represented in the technology field. Now anecdotally, in neurodivergent community spaces, we often joke that certain types of technology companies and technology jobs are what we call a very special sheltered workshop for autistic and other neurodivergent people. And that’s a specific joke in our community. A sheltered workshop legally means a place of employment where disabled people might be paid just cents per hour to do very menial work. And oftentimes people are placed into sheltered work placements because of a presumption of incompetence, that a person is believed to be incapable of doing “real work”, and therefore, they shouldn’t be paid the same real wages as other people. Of course, putting aside that the current minimum wage is also abysmal and not a living wage, but that’s a separate conversation.
But back to the question, we do represent some significant portion of the community overall and undoubtedly within the tech space too. We don’t know what the exact numbers are. And there’s a lot of reasons for that too. There’s a lot of racial and gender disparities in who can be accurately identified with different types of disabilities. But even beyond that, a lot of people might not feel comfortable claiming a disabled identity or disclosing that they are disabled, even if they do know, just because again of ableism in their workplace and in society. If people think of people with your kind of disability as “crazy” or “dumb”, then why would you want people to know that you have that disability?
[00:24:15] JP: So one of the things that struck me in the research was the sheer number of applications and resumes that companies are sifting through. And I can kind of understand the desire for tools that could help with that. Are there hiring tools that you would recommend or point our listeners to that might not be so exclusionary?
[00:24:36] LB: It’s really hard to say here’s a tool that works or here’s a tool that’s less bad, because every one of them has significant flaws. But what I would point to instead is asking a different set of questions. What I wish employers would consider are: What is this tool actually measuring? How is this tool measuring it? Is what the tool is measuring actually directly job-related and necessary to perform the functions of that job? And to what extent is this tool’s assessment a deciding or determinative factor in what happens to a candidate? And I think the answers to those questions can help an employer figure out more carefully whether they should be using a particular tool at all, or if they do choose to adopt a certain tool, how they’re using it and what the context is in which they’re using it. And I would caution too, just in this conversation, that sometimes an employer will say, especially if they’re well-intentioned, “Well, of course, there’s going to be a human review. We don’t allow the machine to just make decisions on its own.” But then the flaw is, as we know, people are people, and if a person believes this tool is relatively effective, if not close to perfect, perhaps allowing for a small margin of error, then why should I question what the machine says? If the machine gives an assessment that this candidate is likely to be successful at this job and that this other candidate is unlikely to be successful at the job, who am I to really question the machine? And so while the machine isn’t necessarily making a determinative assessment in that scenario, in that a person still gets to decide whether to pass the candidate along, the hiring manager in that scenario may be likely to just defer to what the machine said as a matter of course, thereby rendering useless the hypothetical process of human review. So those are the questions that companies should really be asking themselves, and they need to be really honest, right?
Are their hiring managers ever going to feel like they can or should question the results of a computerized assessment or not? And if they think that their managers may not feel comfortable questioning the assessment, or they just may decide not to because it’s easier to go with what the assessment says, then there is a great harm and an enormous risk in continuing to adopt that tool. And that is both a question of legal liability, if their tool is discriminatory against people who belong to protected classes, and a question of morality: they are considering adopting a tool that has a high likelihood of depriving people from marginalized communities of a fair and equal opportunity to be considered for work, in a society where our ability to obtain a job is directly correlated to our ability to obtain housing, food, and healthcare.
[00:27:36] SY: So there are some companies like Microsoft who have programs where they are specifically trying to target and hire people with autism. Is this the right solution? What are your thoughts on efforts like this?
[00:27:49] LB: They’re very well-intentioned, but they’re also deeply flawed. Those types of programs often end up capitalizing on stereotypical ideas about what autistic or other disabled people are good at and don’t necessarily account for the full diversity of experiences that autistic and other disabled people have, as to what skills we might actually possess or what types of work we’re able to do well or not. And unfortunately, many times people who design and implement those kinds of targeted hiring programs around disability still have a very charity mindset, where they think they’re doing a public service: “We are trying to help those poor unfortunate people who probably can’t get a job the normal way and probably can’t function in a workplace like a normal person. But we can exploit what skills we believe they may or may not actually have and benefit from those skills and say that we’re helping them,” and not necessarily consider us as on par with other employees. I have another question for people who run those programs: “How many of the autistic or other disabled people hired through those programs are ever promoted? How many get a raise beyond an average cost of living raise at the end of the year? How many are ever considered for advancement into management or senior positions? How many are ever mentored to go farther in their career? How many are ever working alongside coworkers who aren’t necessarily part of that program? And how many of their coworkers actually think of them as a full, complex human being instead of as a savant?”
[00:29:26] JP: Your project also looked at benefits discrimination and the use of AI in the context of risk and threat assessment. How do these things also exclude people with neurodivergencies or disabilities?
[00:29:38] LB: In our benefits work, we specifically looked at the ways in which states have adopted AI-based systems to make determinations about who is eligible for benefits and, of those people who are eligible, what kind of benefits they should receive, in a dollar amount or in an hours allocation when we’re talking about care for health and disability-related needs. And we particularly focused on states that were using and are now using AI-based tools to determine eligibility and benefits levels for people in Medicaid-funded programs, particularly for home and community-based services and long-term supports and services. So by definition, people that are receiving funding through these programs are almost entirely, as a whole, disabled in one way or another. And I’ll also just pause for a moment to acknowledge one of the things that you said when you asked that question. We don’t say “people with neurodivergencies”, just because that doesn’t really make any sense. “People who are neurodivergent”, “neurodivergent people”, is a way of describing people who have disabilities that affect how our brains work. We don’t say “people of races”; that doesn’t really make sense. We say people who are racially privileged or racially marginalized, and we might mean white people and people of color, but we wouldn’t say “people of races”. So we wouldn’t say “people with neurodivergencies”. It doesn’t really make any sense to say that. But neurodivergent people and other disabled people, as a general rule, are affected profoundly by any type of policies or practices around benefits allocation, because we’re more likely to need access to those kinds of benefits.
And in some cases, especially the Medicaid-funded services, the care that a person receives through Medicaid might be necessary to stay alive or to be able to keep living in a home of your own choosing in the community that you want to live in, rather than being forced into an institution where you don’t have control, where you are not able to have your autonomy or dignity respected, where you are essentially incarcerated under the auspices of providing care. And what we saw is that in every single state that adopted an algorithm-driven assessment for benefits, the percentage of people who lost access to their benefits or experienced a substantial cut in their approved benefits was astronomical. In one state, nearly half of people lost or experienced some loss in the levels of benefits that they had previously been approved for. And for people whose benefits include assistance with food or medications, or even being physically positioned for people who are not able to physically move their own body, that can be life-threatening. If you don’t have access to being fed, you don’t have access to food, you don’t have access to your medication, you don’t have access to being physically moved, if only to prevent bedsores, let alone to be able to engage in activities, have a fucking life, right? That is literally life-threatening, and it also is destructive to people’s autonomy and dignity. And then when we’re looking at the ways in which AI is used in the risk assessment space too, oh, boy, we’ve got so much to unpack, right?
We already know that risk and threat assessment processes are fraught for racial, gender, and religious profiling reasons: that queer and trans youth of color, that youth of minoritized religious backgrounds, particularly Muslim youth and particularly Muslim youth from Black, brown, and Arab communities, are already more likely to be profiled as dangerous, to be written up for disciplinary reports in school, to be given longer and more severe forms of punishment, to be considered more criminally or morally responsible or culpable and less innocent. And you add to that disability profiling, the fact that literally every time there’s a mass shooting or someone is recruited into a terrorist group, there’s a whole media obsession with how crazy this person was, how unstable they are, or how they have a developmental disability and they’re just easily influenced. And we’ve got a recipe for disaster. We already know just from some preliminary data in some school districts that not only are students of color and students from minoritized religious communities more likely to be targeted for risk and threat assessment processes, but so are students with disabilities. And those students that experience more than one form of marginalization are undoubtedly at the greatest risk. I also have to add to that. I don’t know to what extent I’ve ever experienced directly the experience of a risk or threat assessment that is part of this kind of carceral way of thinking about people. But I know that when I was in high school, when I was falsely accused of planning a school shooting, that did not happen in isolation.
[00:34:22] SY: Wow! That sounds terrible. Do you want to speak a little more to that?
[00:34:27] LB: I mean, basically it was the result of racialized, gendered, and disability-coded beliefs about what kids are allowed to be interested in and what kids, teachers, and other students think of as scary or violent. So when we talk about using AI-based tools in any of these spaces, we have to remember that machines never exist outside of social and cultural context. They are not neutral or objective, and they never can be, because the people who design them, the people who adopt them, the people who use them, and the people who act on the results of their assessments and evaluations have their own biases and have their own internalized forms of oppression and are part of broader society and culture. You cannot separate machines from culture.
[00:35:18] SY: Thank you so much for joining us, Lydia.
[00:35:19] LB: Thank you so much. Again, it’s been a real pleasure and I hope to connect again soon.
[00:35:31] SY: Coming up next, we dive into a ProPublica investigation that found that the encrypted messaging app WhatsApp is not quite as private as Facebook claims, after this.
[00:35:52] SY: Joining us is Craig Silverman, Reporter at ProPublica. Thank you so much for being here.
[00:35:57] CS: Hey! Thanks for having me.
[00:35:59] JP: So recently, you and a couple of colleagues investigated WhatsApp and found that even though Facebook claims the encrypted messaging app that it acquired is secure and privacy-focused, that might not be the case. Can you talk about how you conducted this investigation?
[00:36:13] CS: Yeah. So WhatsApp, as you noted, was an app that was acquired by Facebook for a lot of money, billions and billions of dollars, back in 2014. And really since the moment of acquisition, people were very skeptical of Facebook’s motives, right? Because WhatsApp, when it was started by two former Yahoo engineers, retained very little data about the people using it. They were in the process of implementing end-to-end encryption for your messages, and they were actually extremely hostile to even the idea of digital advertising, the kind of advertising that Facebook has become one of the biggest, most valuable companies in the world based on. So it seemed like a really odd match. And certainly since the moment of acquisition, Facebook has made a lot of decisions over time, and the theme of them really has been, at times, increasing the amount of data being collected and shared with Facebook, finding ways of simultaneously implementing privacy-enhancing features like end-to-end encryption and, more recently, disappearing messages, but also implementing systems and other means that can at times erode people’s privacy. So for us, one of the initiating things for this was just being interested in what is really happening inside WhatsApp, such a hugely, widely used app with more than two billion people on it. And then the second part is that we did manage to obtain a confidential whistleblower complaint that had been filed with the SEC, where you have basically insiders saying that the systems Facebook has built in order to oversee WhatsApp are degrading the privacy that Facebook and WhatsApp really tout in all of their marketing language. So we had an interest in the app and obviously an interest in general in what Facebook is working on.
And this whistleblower complaint really led us to focus in and say, “Okay, so of the limited data that Facebook is collecting and the limited visibility that encryption does provide in terms of the messages, what are Facebook and WhatsApp looking at and how are they managing these systems?” And that kicked off the investigation to look deeply at the kind of content moderation operation they have for WhatsApp, which they have scrupulously avoided talking about publicly, and also at what can be done with the data that is there in WhatsApp, which is largely metadata. For example, if there’s a lawful request from the US government, they can set up a wiretap to see who is messaging whom.
[00:38:39] SY: Tell me a little bit more about some of the major findings, especially the most alarming ones you uncovered.
[00:38:47] CS: You know, to pick up that point on metadata: because WhatsApp does have end-to-end encryption, the messages being exchanged in transit, Facebook doesn’t see that, WhatsApp doesn’t see that, people outside of those participating in the conversations don’t see it. So that is there. Nothing that they do breaks encryption. But on the metadata, there is a lot that you can understand and infer about relationships and people just from the data about conversations. And so WhatsApp does collect and analyze things like people’s profile photos and the names of groups. They also are able to see your IP address. And since 2016, they’ve been sharing data across what they call a sort of family of Facebook apps. And so there are analysts who can absolutely see that, “Oh, this person has an Instagram account, a Facebook account. They also have a WhatsApp account.” And so there is a kind of universal profile within Facebook that they have access to. And on the metadata front, I think one of the most surprising things we saw is that there is this capability that WhatsApp has built in where they can turn on the ability to watch and keep logs of who is messaging whom and when. That is not something they do as a regular course of business. They have to turn it on at the request of law enforcement, what they call a lawful request. But what we found involves a whistleblower named May Edwards, who had provided confidential Treasury Department documents to my former employer, BuzzFeed News (I wasn’t involved in the story), for a massive investigation, a Pulitzer finalist, into how dirty money flows into US banks. And one of the ways that the US government was able to determine that she had done this was that they got her WhatsApp metadata and could see that she was communicating with a journalist at BuzzFeed News. So the jailing of a government whistleblower was in part aided by WhatsApp metadata.
And that had not really been revealed before, that one of the ways she was caught was because of this WhatsApp metadata. So that was one of the things that I certainly found really surprising when we determined that was the case.
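The power of pure metadata that Silverman describes can be illustrated with a short sketch. Everything below is hypothetical: the log format, field names, and parties are invented for illustration and are not drawn from WhatsApp’s actual systems.

```python
from collections import Counter

# Hypothetical message-metadata log: no message content, just who
# messaged whom and when -- the kind of record the lawful-request
# "wiretap" capability described above could produce.
metadata_log = [
    {"sender": "whistleblower", "recipient": "journalist", "ts": "2018-10-01T09:14"},
    {"sender": "whistleblower", "recipient": "journalist", "ts": "2018-10-01T09:20"},
    {"sender": "whistleblower", "recipient": "coworker", "ts": "2018-10-02T11:02"},
    {"sender": "journalist", "recipient": "whistleblower", "ts": "2018-10-02T12:45"},
]

def contact_frequency(log, user):
    """Count how often `user` exchanged messages with each other party."""
    counts = Counter()
    for record in log:
        if record["sender"] == user:
            counts[record["recipient"]] += 1
        elif record["recipient"] == user:
            counts[record["sender"]] += 1
    return counts

# Without reading a single message, the metadata alone reveals the
# user's most frequent contact.
print(contact_frequency(metadata_log, "whistleblower").most_common(1))
# [('journalist', 3)]
```

Even this toy analysis shows why investigators value metadata: communication patterns alone can be enough to identify a source.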
[00:40:54] JP: Just to circle back on this. It’s not so much that content moderators or contractors are reading individual WhatsApp messages, but they’re looking at the metadata and potentially linking that with messages they can read on Instagram and Facebook attached to the same account?
[00:41:12] CS: So the moderation operation is another important point you raise there. On the metadata part, Facebook, of course, has systems that are always scanning the stuff outside of messages. So it is always looking at profile photos, usernames, and group names, and scanning those for signs of potentially violative material. For example, there are cases where people might actually use child sexual exploitation material in a profile photo, and WhatsApp will scan and potentially identify and flag that automatically. So there is a constant scanning of that metadata material. And then what happens is stuff can be surfaced by that scanning, which then goes to an army of about a thousand moderators around the world who will review it and decide whether it warrants an account ban, whether to continue watching the account, or to do nothing. But there is a scenario under which these moderators can actually see message content from WhatsApp, and that is when a user chooses to file a report to WhatsApp. So there is a reporting function in the app where, if you receive something that is, I don’t know, harassing material or objectionable material that you think violates WhatsApp policies, you can choose to report that. And what’s going to happen is it’s going to send that particular message you are reporting, but it’s also going to send about four messages around it. So you get the five most recent messages from the conversation. So the one you’re reporting, as well as the ones between you and the other parties, are going to be shipped off to WhatsApp. The specific number of five messages wasn’t known before. WhatsApp had been vague, saying, “Recent messages will also be shared.” They say this is their way of balancing privacy and security.
They’re saying, “We have user reporting so that if people encounter violative material, they can report it to us.” They’re not scanning the material in the messages themselves, but they feel like that’s an effective balance. And then those messages get shipped off and are looked at and analyzed by, again, this army of about a thousand reviewers positioned around the world, who are supposed to have facility in a wide variety of languages, who are supposed to be familiar with WhatsApp’s policies, and who are supposed to be able to make judgments based on that content. And so yes, they can see information about a person’s WhatsApp account and how that person might be present on other platforms. And they could certainly go and look at the public content on that same person’s Facebook account and Instagram account if they wanted to. And if content related to that person has been reported by them or someone else they interact with, then they can also view some of that content as well. We go into great detail in the story; there are lots of false positives that get sent. People actually use reporting as a weapon. They’ll drop in something objectionable and then report somebody. And so there are ways that the system gets abused, which is inevitable, right? But one of the things to me that really stood out about this whole large infrastructure is that WhatsApp and Facebook actually refuse to even call it content moderation. Publicly, they’ve never acknowledged that they have all of these people reviewing these reports and the stuff surfaced by AI. When we spoke to them, they still refused to call it content moderation. And so I think one of the themes of the story is really just how the public messaging around WhatsApp positions it as this staunchly private app: we have no visibility, we don’t see anything, it’s totally private, very little data is shared with Facebook. When you actually dig into the details, it’s much more nuanced than that.
And I think that there’s a real gap between the public messaging and marketing and what is actually happening in the app. It’s not breaking encryption when somebody reports something. But I think Facebook, the way that they have represented it to Congress, to the public, and in the marketing, is just like, “This is a rock-solid, totally private app.” And I think they have, in some ways, exaggerated some of the benefits of encryption, or at the very least emphasized encryption and avoided really detailing how they use all of this metadata and what can be done with it. And one of the ways that it’s obscured in a concrete way is that for Instagram and Facebook, they publish these transparency reports of how many accounts they ban and how many requests they receive from law enforcement, for wiretaps or what have you. And WhatsApp is basically completely absent from these transparency reports, yet at the same time, WhatsApp is banning accounts all of the time for violating its policies. They have this data that they could be transparent about, but what we found from the reporting is that this data isn’t there because, for Facebook and WhatsApp, it’s almost like they don’t want to undermine the privacy messaging.
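The reporting flow Silverman describes, where a user report forwards the flagged message plus enough preceding context to total five recent messages, can be sketched roughly as follows. The function name and exact windowing are assumptions for illustration; the interview only establishes that about five recent messages are forwarded.

```python
def build_report_payload(conversation, reported_index, window=5):
    """Collect the reported message plus preceding context, up to
    `window` messages total. Encryption is not broken: the reporting
    client already holds the decrypted plaintext and forwards it
    voluntarily to the moderation system."""
    start = max(0, reported_index - (window - 1))
    return conversation[start:reported_index + 1]

# Hypothetical conversation; the last message is the one being reported.
chat = ["msg1", "msg2", "msg3", "msg4", "msg5", "msg6", "offending msg"]
payload = build_report_payload(chat, reported_index=6)
print(payload)
# ['msg3', 'msg4', 'msg5', 'msg6', 'offending msg']
```

The design point this illustrates is that the privacy boundary moves with user action: nothing is decrypted server-side, but a report voluntarily exports a small plaintext window to human reviewers.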
[00:45:56] JP: In comparison to Facebook and Instagram, are WhatsApp’s standards for what constitutes improper material similar? Do they differ? Do we know?
[00:46:07] CS: That’s a good question. So you can read the public community standards for Facebook. You can read those for Instagram. There are also the company’s advertising policies and commerce policies. You can tell I’ve been focused on Facebook for a long time because I had to go through all these and know that they all exist. And with WhatsApp, it’s not the same level of disclosure. Certainly, there is overlap. If you are using an account, for example, to spread child sexual abuse material, that is absolutely not allowed on any of Facebook’s platforms, and not only will you get banned, but we know that material in your account will be filed away to NCMEC, the national center that tracks child abuse online, and then often can get referred to law enforcement. So there are policies that certainly cut across. There are policies against impersonation. There are policies against harassment. We saw that WhatsApp actually has a specific queue for those moderators dedicated to sextortion, where people will message with someone, get them to send compromising or nude photos of themselves, and then use those to blackmail them. So that’s an entire category that moderators on WhatsApp are looking at, and it is something that they look at in other places. But the WhatsApp community standards aren’t really out there to read and pore through; it again is somewhat obscure compared to the other platforms. Facebook’s and WhatsApp’s response to this is to just say, “Well, WhatsApp is a very different platform. It has encrypted content and it’s not the same as these more public-facing platforms like Facebook and Instagram.” But again, if they have these policies, if they’re enforcing against them, if they are banning accounts and taking other action, then the same transparency level should be met there. And as of right now, it’s not.
[00:47:52] SY: So speaking of CSAM, how does WhatsApp’s approach to detecting child abuse images compare to Apple’s? Apple has been in the news, there’s been a huge backlash, and they’ve actually halted plans, at least for now, to introduce that automated system on their devices that scans photos. Given that WhatsApp puts such a big emphasis on encryption, “we can’t see your data” and all that, how does WhatsApp approach that problem?
[00:48:20] CS: Yeah. So they are certainly much more active when it comes to CSAM detection than Apple, and certainly, it should also be said, than Signal. Signal, which of course has some of the former WhatsApp executives now working at the Signal Foundation and working on that app, says it collects zero metadata. They have nothing to give over to law enforcement, except, I believe, when an account was created. And so they do no scanning. They do no proactive detection of CSAM. And when it comes to Apple, as you mentioned, they recently came out with an idea, a proposal of what they were going to start doing, but now they’ve hit pause on it because there was a lot of blowback that it was a process that could be abused by authoritarian countries, to not just have them scanning for CSAM, but also have them scanning for censorship terms or things like that. And so WhatsApp is very proud, frankly, of the way that they are dealing with things like CSAM and abuse, because they believe that they have kind of pioneered the right way for an end-to-end encrypted platform to deal with these issues. And so what they say is, “Well, we take everything that is not encrypted, like profile photos and usernames and that kind of thing, and we proactively scan using matching technology against known CSAM material to see if it exists there.” And then if they get a match, they validate it and then they will share it with NCMEC, who then could also share it with law enforcement. And so they have a much more robust, proactive scanning program than Signal or Apple do. They are praised by NCMEC for at least doing something on WhatsApp. And so they are different in that sense, and they are, again, quite proud that they are reporting stuff to NCMEC. And those are some numbers that they do quote, of how many things they’re reporting to NCMEC. And so it is a differentiator, and they get praised by some people for that.
Although there are also organizations that wish Facebook and WhatsApp were doing more. You’ll always sort of have this tension there between what law enforcement and child advocates want and what privacy advocates and other people want.
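The proactive scanning Silverman describes is, at its core, matching unencrypted surfaces against a database of hashes of known material. The sketch below uses an exact cryptographic hash as a simplified stand-in; production systems use perceptual hashes (PhotoDNA is the widely cited example) so that resized or re-encoded copies still match, and the database contents here are obviously hypothetical placeholders.

```python
import hashlib

# Hypothetical database of hashes of known abusive images.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"bytes-of-a-known-abusive-image").hexdigest(),
}

def scan_unencrypted_surface(image_bytes):
    """Check a profile photo or group image -- content that is never
    end-to-end encrypted -- against the known-hash database. A match
    would then go to human validation before any report to NCMEC."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_HASHES

print(scan_unencrypted_surface(b"bytes-of-a-known-abusive-image"))  # True
print(scan_unencrypted_surface(b"ordinary-vacation-photo"))         # False
```

The key property is that only material outside the encrypted channel is ever scanned, which is why this approach coexists with end-to-end encryption.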
[00:50:23] JP: So there’s been reporting in the past about what Facebook content moderators go through in terms of their work and psychological impact. And you and your team spoke to a bunch of current and former WhatsApp moderators. So I’m wondering if you can tell us about these people and what it looks like to be in the moderator positions at WhatsApp.
[00:50:40] CS: It’s a scenario where you’re not actually working for WhatsApp. You are a contractor. Typically, they’re employed by Accenture, which is a massive global tech consulting firm that provides most of these types of moderator positions to Facebook now. A lot of these people are in their 20s and 30s. They don’t necessarily come from technical backgrounds, and they’re hired and employed by Accenture, and the pay starts around $16.50 an hour. And there’s a lot of emphasis on volume here. So let’s remember that WhatsApp has more than two billion users. And so you have the AI scanning the non-message content to surface stuff that goes in queues for them to look at, and then you have WhatsApp users themselves reporting things, sending those messages off, which they also have to look at. So typically, these folks are asked to clear about 600 tickets or so a day, which gives them less than a minute to look at each one. And this is consistent really for a lot of the content review work, not just at WhatsApp. So they get less than a minute per ticket. They have to make a determination. They have to have in their head all of the policies and rules of what’s allowed and not allowed on WhatsApp, and they have to move really, really quickly. I’ve spoken to people, not just in the WhatsApp world, but in moderation positions for different Facebook products working for Accenture. And at the end of the day, what they say is that Accenture really just wants to meet the terms of the contract, and the terms of the contract go down to things like the number of tickets per person being cleared per day, per month. And so you’ll have this tension where, if they’re falling behind in the queues, Accenture managers will be like, “All right, everybody, we need to step it up. We need you to move a little bit faster.” And then people will move faster.
And then, like a few weeks later, the Accenture managers will come down and say, “All right, listen, we’re having a lot of problems with quality control here,” because they’ll review tickets that people have done to see if they made the right decision. And so inevitably, they tell them to speed up, the quality degrades, and then they start telling them to slow down again. It’s the seesaw effect that I’ve heard moderators explain to me, of just always trying to hit the right metrics to meet the terms of the contract. And for them, it’s really repetitive work, but there can also be serious health consequences. We talked about CSAM here, and these are people who are going to be encountering it. They’re going to be seeing it. And that is absolutely a traumatic thing. They’re going to be seeing extremely violent content. And so this can have a huge effect on their mental and physical health, which is something that’s obviously been documented in other types of moderation work as well.
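The "less than a minute per ticket" figure can be checked with simple arithmetic. The 600-ticket quota comes from the interview; the eight-hour shift length is an assumption, since the interview doesn't state it.

```python
tickets_per_day = 600  # quota cited in the interview
shift_hours = 8        # assumed shift length (not stated in the interview)

# Seconds available per ticket if the whole shift is review time.
seconds_per_ticket = shift_hours * 3600 / tickets_per_day
print(f"{seconds_per_ticket:.0f} seconds per ticket")
# 48 seconds per ticket
```

Roughly 48 seconds per decision, consistent with Silverman's "less than a minute per ticket," and that is before accounting for breaks or harder cases.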
[00:53:13] JP: One thing that we’ve kind of alluded to is that WhatsApp is extremely popular, not only in the US but overseas as well. I’m wondering if there’s any concern that in addition to monitoring for CSAM, there could be pressure from foreign governments to monitor for images or messages that they deem improper, like things from activists and dissidents.
[00:53:36] CS: Yeah. This goes back to kind of the issue with Apple, right? A lot of people were saying, “Well, you’re going to create this sort of scannable infrastructure. And what if the Chinese government said any images of the tank in Tiananmen Square need to be flagged to us?” And that risk exists with the WhatsApp system. They say, “Look, our AI is scanning for very specific things. We control that.” But it of course opens the door: whoever is controlling the system decides what inputs go into it and what gets added. And so, yes, is it conceivable that there may be a scenario where they are pressured by governments to scan for certain things? That is, I think, always a risk when you open the door and do this kind of work. WhatsApp doesn’t really talk about any of those things. I think they’ve sort of never acknowledged having done that. But I think what we’ve heard is that, with that kind of wiretap capability of seeing who is messaging whom, WhatsApp treats lawful requests from different governments around the world in different ways. So with the US, they’re going to respond to a court order to implement this, but there are probably other countries in the world where a court order is going to be treated with far more skepticism and where they are going to be inclined to push back. This comes down to, I think, one of the key things being raised in the story, which is the lack of transparency with WhatsApp, because they are not giving detailed statistics, broken down by country, about the number of requests they’re getting or the nature of those requests. We have no idea which ones they’re receiving, which ones they’re accepting, and which ones they’re rejecting. And if we had that level of transparency, maybe that would increase the level of trust and confidence that people have that WhatsApp is making the right decisions with this.
[00:55:22] SY: Is there anything else you’d like to add that we haven’t covered yet?
[00:55:25] CS: The last thing, the thing to pay attention to moving forward, is that WhatsApp is very much on a push to monetize. It never really made any money. Facebook paid billions of dollars to get it because it was a threat. It was becoming very popular, but now they absolutely need to earn money from it. And so they’ve rolled out some things. There are API services you can pay for to do customer service messaging with your customers, paying by message volume. They’re enabling people to set up e-commerce stores in WhatsApp. And so the thing to pay attention to is that since the acquisition of WhatsApp, we have seen them take more data, and in some ways reduce the level of privacy, while they add privacy-enhancing features. What is the introduction of these kinds of efforts to make money going to do? One of the things that we cite in the article is an internal WhatsApp marketing document that we obtained where they talk about privacy being their key messaging, but also about opening the brand aperture, which is very marketing speak, to talk about monetization. And so this, I think, is going to be the tension with WhatsApp going forward: making as much money as they can out of it while maintaining their privacy messaging. And a big question is, “Are they going to be more transparent about the things we’ve highlighted here, or are they still going to continue to try and obfuscate and sidestep those things?”
[00:56:47] SY: Well, thank you, Craig, so much for joining us.
[00:56:49] CS: Yeah. Thanks for having me.
[00:57:00] SY: Thank you for listening to DevNews. This show is produced and mixed by Levi Sharpe. Editorial oversight is provided by Peter Frank, Ben Halpern, and Jess Lee. Our theme music is by Dan Powell. If you have any questions or comments, dial into our Google Voice at +1 (929) 500-1513 or email us at [email protected]. Please rate and subscribe to this show wherever you get your podcasts.