Season 7 Episode 8 Mar 3, 2022

Battling Russian Disinformation, Big Tech Lends a Hand to Ukraine, and IBM's Persistent Ageism Problem

Pitch

We're not taking the BS this time around.

Description

In this episode, we talk about various ways in which big tech has lent a hand to Ukraine in its war with Russia. Then we talk about Russian disinformation efforts with Dr. Jeffrey Blevins, professor in the Journalism Department at the University of Cincinnati and co-author of the book, Social Media, Social Justice and the Political Economy of Online Networks. Then we talk about leaked IBM documents that show the company’s persistent ageism problem with Peter Gosselin, former investigative reporter at ProPublica who co-penned the piece, Cutting ‘Old Heads’ at IBM.

Hosts

Saron Yitbarek

Disco - Founder

Saron Yitbarek is the founder of Disco, host of the CodeNewbie podcast, and co-host of the base.cs podcast.

Josh Puetz

Forem - Principal Engineer

Josh Puetz is Principal Software Engineer at Forem.

Guests

Dr. Jeffrey Blevins

The University of Cincinnati - Professor

Dr. Jeffrey Layne Blevins is a Professor in the Department of Journalism at the University of Cincinnati. His most recent book, co-authored with Dr. James Jaehoon Lee, is "Social Media, Social Justice and the Political Economy of Online Networks" (University of Cincinnati Press, 2022).

Peter Gosselin

ProPublica - Former Contributing Reporter

Peter Gosselin was a contributing reporter at ProPublica covering aging. In more than three decades as a journalist, he has covered the U.S. and global economies for, among others, the Los Angeles Times and The Boston Globe, focusing on the lived experiences of working people. He is the author of “High Wire: The Precarious Financial Lives of American Families,” for which he devised new data techniques to show that economic risks were being shifted from the broad shoulders of business and government to the backs of working households. In addition to reporting, he has been a visiting fellow at the Urban Institute in Washington, chief speechwriter to the treasury secretary and an economic adviser to the original Department of Health and Human Services team implementing the Affordable Care Act.

Show Notes

Audio file size

42,089,189 bytes (about 40 MB)

Duration

00:43:51

Transcript

[00:00:10] SY: Welcome to DevNews, the news show for developers by developers, where we cover the latest in the world of tech. I’m Saron Yitbarek, Founder of Disco.

 

[00:00:19] JP: And I’m Josh Puetz, Principal Engineer at Forem.

 

[00:00:22] SY: This week, we’re talking about various ways in which Big Tech has lent a hand to Ukraine in its war with Russia.

 

[00:00:28] JP: Then we’ll talk about Russian disinformation efforts with Dr. Jeffrey Blevins, Professor in the Journalism Department at the University of Cincinnati and co-author of the book, Social Media, Social Justice and the Political Economy of Online Networks.

 

[00:00:42] JB: Simply having accurate information out there doesn’t matter as much. It’s what people are likely to believe, what are they primed to believe.

 

[00:00:52] SY: Then we talk about leaked IBM documents that show the company’s persistent ageism problem with Peter Gosselin, Former Investigative Reporter at ProPublica, who co-penned the piece Cutting ‘Old Heads’ at IBM.

 

[00:01:05] PG: Even as the company was laying off people, we could see into the system and see that these people were highly rated by the company itself on their technical expertise in various areas.

 

[00:01:23] SY: So the world has definitely become a scarier place in the past couple of years. First with the pandemic, which has killed millions worldwide and still continues, and now coupled on top of that, a war in Ukraine, which began on February 24th. It was an unprovoked invasion by Russia and is the largest attack in Europe since World War II. And because things can feel so terrible and overwhelming and bad, we wanted to take a moment to shine a spotlight on some of the better news that has come out, mainly how Big Tech seems to have banded together to lend a hand to Ukraine. So first off, we have Airbnb, which is offering up to a hundred thousand Ukrainian refugees free short-term housing. The housing is being funded by the company as well as donors and hosts on the platform. They’re also working with different European countries to work out deals for long-term stays. Airbnb also helped resettle 21,000 refugees last year after the US pullout from Afghanistan and the Taliban takeover. As of March 1st, the UN Refugee Agency said that there had been a total of about 677,000 Ukrainian refugees thus far. So definitely a lot more than 100,000, but it’s something.

 

[00:02:37] JP: Cryptocurrency has also provided a quick way for investors to donate money to the Ukrainian government. After several tweets from Ukrainian government accounts asking for donations and listing Bitcoin, Ether, and Tether wallets to donate to, Ukraine received over $22 million worth of crypto donations as of Monday, according to blockchain tracking firm Elliptic. The famous Russian activist group Pussy Riot also helped create a decentralized autonomous organization, which has helped raise $3 million of the funds. And outside of currency, another way Big Tech is trying to help is through communication. One of the most interesting moves was made by Meta, parent company of Facebook. The company’s president of global affairs, Nick Clegg, announced Tuesday that they are giving Instagram users in Russia and Ukraine the ability to switch on encrypted messaging in the app.

 

[00:03:25] SY: Very cool! And as the Russian invasion of Ukraine ramped up to an all-out war, so too did Russian disinformation efforts ramp up across the internet. But unlike the Russian disinformation campaigns that were found to have impacted the 2016 US Presidential Election, these efforts might not be so effective this time around. Not only is there a large worldwide consensus that this conflict is more black and white, it seems that tech companies are taking this particular disinformation more seriously. In the past week, Apple, Meta, YouTube, and TikTok have shut down the Russian state-run media outlets RT and Sputnik on their platforms in the European Union. And Reddit has even quarantined the r/Russia subreddit due to misinformation, meaning that unless a user has specifically added this subreddit, it won’t show up in their searches, recommendations, or feeds. Coming up next, we dig deeper into Russian disinformation campaigns, whether these efforts by tech companies will have a significant impact on them, and why tech companies seem to care more about disinformation this time around, after this.

 

[MUSIC BREAK]

 

[00:04:53] SY: Here with us is Jeffrey Blevins, Professor in the Journalism Department at the University of Cincinnati and co-author of the book, Social Media, Social Justice and the Political Economy of Online Networks. Thank you so much for joining us.

 

[00:05:07] JB: Hey! Thank you for having me. I appreciate it.

 

[00:05:09] SY: So let’s start off by talking about your research background and your book, which talks a lot about Big Tech’s current issue with disinformation. Tell us a little bit about it.

 

[00:05:19] JB: I’ve been studying disinformation, misinformation, and fake news for the past six years now. I really became interested in this in the 2016 election cycle, when we all became introduced to the term “fake news”. And in some respects, I feel like that term has outlived its usefulness. I feel like what we’re dealing with now is more misinformation, disinformation, and I hope it’s okay that I say this, but quite frankly what we would call “bullshit”. And we had begun our study actually going a little bit further back. We were interested in how social justice groups could use social media. This technology seemed to be very liberating and allowed people to tell their own stories on their own terms. It allowed people who were involved in social justice activities to circumvent gatekeepers, gatekeepers meaning traditional news media, and this provided a really lively type of discourse that we hadn’t seen before. And at this point, it all seemed well and good until fake news came about and we had different actors that were imitating, pretending to be social justice groups. So for instance, the largest BLM, Black Lives Matter, group on Facebook was for a while actually a fake account created by a group that had no relationship to BLM, and they were using it to push all sorts of inflammatory messages.

 

[00:07:07] JP: So I think we’re all familiar with the concept of bullshit, but let’s talk about the difference between misinformation and disinformation. Is there a difference? I'm seeing those two terms used in media reports and I’m curious, is there a difference between them or could they be used interchangeably?

 

[00:07:28] JB: Well, they tend to be used interchangeably, and being an academic, we have to problematize everything. And so I don’t think that they should be used interchangeably because they really connote different things. Disinformation tends to imply intentionality in the production of false information, that I’m intending to deceive you. And so really this might also be more akin to propaganda, but it’s still not quite the same thing. Propaganda is meant to persuade you and it may have some half-truths, that kind of thing. But disinformation is an intentional production of false information. Misinformation, on the other hand, connotes really kind of the unwitting spread of false information. So if I have been misled, if I have been exposed to something that is untrue or lacks context, and then I spread that, well, then I’m misinformed, but I may not know that. And then there are statements that are offered without any regard for truth or falsity. Truth or falsity is irrelevant to the person who is making the statement. It’s what is easy for me to say and convenient for you to believe that lets me make my point. And I think in our cultural politics today, the real problem is that people already have a preconceived notion about what they want to be true, and they go and they look for information that supports that preconception, and they are more interested in finding information that supports them and winning their arguments.

 

[00:09:14] SY: So let’s get some context. How has disinformation evolved over time, especially over the last six years since you first started really studying this, particularly when it comes to Russian disinformation?

 

[00:09:28] JB: The 2016 election cycle is when we really became aware of fake news, when we had some of the most egregious examples of it, and it was really quite effective then simply because we weren’t expecting it. We didn’t really have a frame of reference for how it would spread on social media. We were witnessing that and experiencing it at the same time. And so we didn’t have the analytic distance that we do now. And since that time, we’ve learned to spot, to recognize some of the red flags. So for instance, if someone on their Twitter account purports to be a school teacher in a small town in Illinois and they have 40 to 50,000 followers, that’s a red flag. That’s inordinate, right? We’ve become better at being able to track where accounts originate from. It’s a bit easier to get ahold of Twitter data. Facebook, for instance, tends to be a little more protective, but we’re also aware that, hey, there are actors out there that are pretending to be something that they’re not. Also, I feel like the claims have become less outrageous. So again, thinking back to that 2016 timeframe, there were claims about satanic Democrats that were eating babies in the basement of a pizza parlor that actually didn’t have a basement, right? And that had a certain amount of life. And people that it might’ve made an impression on earlier wouldn’t be taken in now. So I think certainly, when we look at some of Putin’s claims that Ukraine is being run by neo-Nazis, they just don’t get the same type of traction that they would have, say, six years ago. Now the flip side of that is that these Russian campaigns have become more sophisticated. I already mentioned the example of what we saw with the BLM Facebook page. There were other examples in Minnesota when the George Floyd protests were going on, but there’s this AI-generated picture of a person who pretends to be a Ukrainian who is pro-Russia, and there are several other of these accounts that have been spotted as well. That’s the type of insidious stuff that I feel like would have been more effective six years ago, and it still might work for some. But gauging from the international response, Twitter has become much better at policing this kind of stuff, and I think it’s harder for it to get traction these days.

 

[00:12:17] JP: What kinds of online disinformation efforts seem to be most effective?

 

[00:12:23] JB: Fake news used to be effective. One of the classic examples I like to use was a Twitter post in 2016. The headline was that Pope Francis had shocked the world and endorsed Donald Trump for the presidency. Now, if you’re a Catholic, that should be a red flag, because the Vatican is typically not in the business of endorsing candidates for president. Right? But what was really clever about this particular tweet is it had call letters like we have for our broadcast stations in the US. So it looked like it was from a local television station, and there have been several polls showing that local television news tends to be what people perceive as the most reliable. So that could be very effective. Now I happen to be a nerd. And so I know how to look up stations on the FCC’s website, there’s a database, and there was no such television station or radio station with those call letters at the time. So that was, I think, really effective then. What I think has become more effective now, though, is people posting their opinions and retweeting things. And so then it’s not so much about the original source as it is about the person who is retweeting it and whether they are an influencer or not. And I think this is where people’s confirmation biases tend to come in as well. So if a celebrity, a popular political figure, a popular political pundit retweets something, that is what’s most likely to give it life and to make it more believable for people who already kind of fit that political dynamic. So a really good example of this: in a separate study that I did with one of my co-authors from the book and a couple of others from the Digital Scholarship Center at the University of Cincinnati, we looked at the spread of misinformation about hydroxychloroquine in March of 2020, which seems like an eternity ago, right? There was all this misinformation about, “Oh, this is an effective treatment for COVID-19,” and that really had not been validated then. Now the source of this claim actually originated from just a few QAnon-related accounts. And what was interesting is that the misinformation didn’t spread just from those accounts. It’s when what we termed “bridge actors” picked it up. So someone like President Trump, when he retweeted those claims, well, then it had more credibility behind it because he was President of the United States. And we also saw that prescriptions for hydroxychloroquine went up at this time. People who had never had it before, or doctors who would not have prescribed it before, were doing it. And so even though he did not originate those claims, he was a significant bridge actor, and so were several other celebrities and pundits in that case as well.

 

[00:15:50] JP: We touched briefly upon Russia’s disinformation efforts in the 2016 Presidential Election and they’ve been active with anti-Ukrainian propaganda and disinformation surrounding the invasion. Do you think their efforts in this current situation are as effective as they were in the presidential election?

 

[00:16:14] JB: Really, I hate to speak in absolutes, but I just, frankly, don’t feel like it has been effective overall. Just take, for instance, what they did in the US. That was a long-term strategy. Right? And it wasn’t just about the 2016 election. Once that had passed, then it was propaganda, some fake news, imitating and mimicking accounts, and creating a lot of divisive content about hot button cultural and political issues in the US, say around immigration, around social justice, around police activity and policing. And so there was a steady campaign over several years and across several hot button issues. Whereas with Ukraine, it doesn’t appear that they’ve had that kind of long sustained effort. It was like, “Oh, we’ll start doing it now that the invasion has begun,” but it’s simply too little and too late. Now there might have been a campaign that was going on within Ukraine itself that preceded it. But given the response that we are seeing from Ukrainians overall, it just doesn’t seem that it was as effective. They certainly seem more united than we do in the US.

 

[00:17:45] SY: So one of the big fights, battles, I’d say, over the last six years has been trying to get social media sites to take some accountability for the perpetuation of disinformation and misinformation. And it feels like that hasn’t really happened. It feels like they’ve been able to get around being held accountable. How are they able to do that?

 

[00:18:11] JB: Section 230 of the Communications Decency Act. This is often regarded as the passage that created the internet, but the idea was that interactive computer service operators, and this is a broad term and the courts have applied it broadly, so it applies to social media companies, but it also applies to, say, Amazon, which has a comment section or customer review section. Anyone who posts a comment there, Amazon is not responsible for it; the third party that posted the comment is. But we’ve seen it a lot in social media. And so legally, social media companies have no obligation to check the veracity of claims that are being made on their network. Some do voluntarily. And what’s interesting to me is that when they have started to do that, then we had people who were claiming that this was censorship. And as a journalist and professor and someone who teaches media law, that just kind of grates on me, because at least in the US, private companies, private spaces don’t censor anything. The First Amendment is a limitation on the government’s ability to restrict speech. And so if it’s not the government that’s doing it, it’s fair game. Your freedom is to go to another network, if you will, if you don’t like that they edit out certain content or they block certain people from their services. And I think another way to look at it too is that those social media companies also have a First Amendment right of association. It’s within their limits to delete someone’s account, to ban them from their network. They’re responsible for their own terms of service. I think that there is really no good answer either way. We have social media companies that are doing it now, and it’s really not enough because it happens after the fact. Right? So the misinformation gets out there and it spreads. They become aware of it. They delete it. Maybe they take down the account, and that can just simply pop up someplace else. Right? But also it’s likely to have spread by then. I feel like looking for legal solutions or policy remedies perhaps isn’t the best way to go. What we really need is to be more media literate and media savvy, that as a media consumer, as a social media consumer, perhaps we should take more ownership of the information that we consume. We have to be open to caring about what is true, what is likely to be false, what is a specious source of information, what should be questioned, and accept that there is a value to maybe being wrong sometimes.

 

[00:21:27] SY: So a lot of tech companies have removed Russian state media from their sites, and Reddit even quarantined a Russian subreddit that they said was perpetuating misinformation. Why do you think the tech industry is taking the issue of disinformation more seriously this time than during the 2016 presidential cycle?

 

[00:21:47] JB: I think because the stakes are so much higher, not to put too fine a point on it, but I think we’re all kind of feeling that we might be on the edge of World War III here. But even if I tone down the hyperbole, we’re talking about armed conflict. We’re talking about civilians, children, who are getting killed. It seems, and I would say is, more important, more pressing than an election. And maybe also those companies have learned their lesson. I don’t think many of us took it as seriously as we should have in 2016, but how could we know? In media law, we always tend to rely on this notion of the marketplace of ideas, that, “Oh, we don’t censor anything, and the way we combat misinformation and falsity is with truth and more information.” And we see that that marketplace of ideas metaphor has really kind of outlived its usefulness, that people don’t always consume the truth even if there is more of it than falsity out there. There is a study that just came out of, I believe it’s George Washington University. They had looked at the whole corpus of tweets and information and misinformation about hydroxychloroquine and other treatments for COVID-19. And what they found is that there was more accurate, more credible information than there was inaccurate and non-credible information. Yet we still had the misinformation spread. We still had people who were taking hydroxychloroquine, ivermectin, and then of course the latest one was people drinking their own urine. So simply having accurate information out there doesn’t matter as much. It’s what people are likely to believe, what are they primed to believe. So to bring it back to the current situation, I think the risk is too high to allow Russian state propaganda access to these platforms at this time. It’s just too dangerous.

 

[00:24:15] JP: Let’s talk about some of the moderation measures that tech companies are taking. We’re seeing tech companies limit access, provide encrypted communication, or throttle access in the areas affected by the invasion of Ukraine. Do you think those measures are going far enough? Or is there more that the social media and tech companies could be doing to really mitigate disinformation?

 

[00:24:41] JB: They’re probably doing as much as they can. Take throttling, for instance: it’s certainly not foolproof, but it’s better than nothing. Same thing with encryption. Clearly, if you can keep them out of the marketplace of ideas altogether, that would be the best. But that’s also, I think, an impossible task. If you have agents working outside of Russia’s borders, it’s going to be impossible to detect immediately in all cases. But it certainly helps. It makes it more difficult to get that propaganda out there.

 

[00:25:20] SY: Is there anything that we as developers or just regular everyday people can do to mitigate some of the growing issues of the online spread of misinformation?

 

[00:25:31] JB: It’s a really good question because I know there’s been a lot of talk about trying to create algorithms that could detect misinformation, those kinds of things. And algorithms, they can be really good. They can be really accurate, but they’re not foolproof. And a lot of times they miss context. And there’s such a human element to it that I don’t feel like that is the way we should be thinking either. I don’t know what developers can do on the side of promoting media literacy and information literacy. I feel like certainly education has a role to play, but it also has to start much sooner than say college or even high school. Once you’re old enough to have that app in your hand, you should be versed in what is credible information, you should be versed in what is persuasion, those types of things. I mean, you think what we learned in grade school, mathematics, English language, stuff that you use every day. Well, we spend about a third of our waking hours involved with some form of media, apps, social media, you name it. I really feel like the way to address that is in those curricula and not just about, “Oh, how do we create and use these technologies or these apps?” But what do they mean and how are others trying to use these tools to manipulate or persuade?

 

[00:27:16] SY: Well, thank you so much, Jeffrey, for being on the show.

 

[00:27:19] JB: Absolutely.

 

[00:27:25] SY: Coming up next, we talk about newly released documents that show a persistent ageism problem at IBM after this.

 

[MUSIC BREAK]

 

[00:27:49] SY: Here with us is Peter Gosselin, Former Investigative Reporter who co-authored the ProPublica piece, “Cutting ‘Old Heads’ at IBM”. Thank you so much for joining us.

 

[00:27:59] PG: Thanks for having me.

 

[00:28:00] SY: So you did a huge investigative piece on ageism in tech and specifically looking at IBM. What were some of the initial patterns and things you heard when you started digging in and interviewing folks who reached out to you?

 

[00:28:14] PG: Well, we got to this story by simply sending out a one-pager on my own experience. I got laid off when I was 63, as it turns out in the very week my twins started college, not a great time to be laid off. We sort of told this story and said, “Folks, if anybody’s got a story about age and work, let us know.” And we got hundreds and hundreds of responses, and there was a big cluster around IBM. And so we did another one of these things and said, “Hey, we see a lot of people talking to us who said they’ve had bad experiences with IBM. What’s the situation?” And then we did get thousands of responses, which turned out to have been very important for doing the story, because IBM and most large corporations are incredibly complicated and there are lots of stovepipes. And during the course of doing the story, they kept saying to us, “Well, that doesn’t apply here,” or, “That doesn’t apply everywhere in the company.” And by having so many people talking to us across so many stovepipes, we were able to see patterns that management kept saying weren’t there.

 

[00:29:25] JP: Can you talk about some of the documents and the data that you found that supported your investigations at the time?

 

[00:29:31] PG: We had really deep stuff. We saw huge spreadsheets with thousands of people on them, ratings and systems for decision-making, and so on and so forth. And we saw them across many divisions. We could see patterns across many divisions. These were documents, many of which were produced as part of this effort to essentially offload older IBM workers.

 

[00:30:00] JP: Why did IBM want to get rid of their older employees? I think a lot of times we hear about companies getting rid of their older employees out of a desire to avoid paying out pensions and retirements. Was that a factor here? Was it that they felt pressured to have a younger workforce to compete in the market? I’m kind of curious if you discovered any of the reasons, or if any of the documents went into the reasons why IBM was pursuing this course.

 

[00:30:26] PG: Older workers traditionally came with more expense. People have been with the company a long time. They’ve gone up the pay scale. They have pensions. They get sick, so they have higher healthcare costs. IBM had done a lot to trim back a lot of the benefit issues. In 2004, IBM settled a huge suit over changes that decimated the pension system. Basically, after a certain time, sometime in the ’90s, any new hires would only have a 401(k), and pensions were removed. So a lot of that was off the table. But by the time we got to the period that we were really looking at, which was basically the five years before 2018, when we ran the story, things had changed. When I was in my twenties, the hottest company in the world to work for was IBM. And it basically owned the mainframe world, right? That had been changing on it utterly, and the company had a near-death experience in the early ’90s. It laid off huge numbers of people, and it’s really interesting to talk to the folks who either survived that period at IBM or were hired since. There’s really widespread agreement that IBM needed to do that. It was near bankruptcy and it needed to do that. And people went along with that. What happened in the period from the middle of the 2000s to when we wrote was that IBM kept trying different strategies to change its business model, the so-called CAMSS strategy: Cloud, Analytics, Mobile, Social, and Security. And it seemed to think that it needed a younger workforce to succeed in these areas. Now one of the things that I guess I’ve wondered about is that while it was pursuing this strategy of these new areas, cloud and analytics and so forth, for a fair amount of the period we were looking at internal performance ratings of people, and the older people were being rated very well at pursuing this very strategy, and yet they were laid off. And one of the things that made IBM’s mainframes back in the ’70s and the ’80s so powerful is they could build these wicked fast chips. IBM basically heaved that overboard. Their chip operation is now a separate company called GlobalFoundries. And I guess I wonder whether, just at the moment they were thinking they had to get rid of all these people, the centerpiece of their business from the ’70s came back alive again. It would be a severe irony if that’s true, because they basically depopulated the workforce in that area.

 

[00:33:07] SY: So the documents seem to show a very methodical approach to how the company wanted to push out its older employees. Can you talk about some of the main methods IBM used to accomplish this?

 

[00:33:18] PG: There were several we spotted across a lot of divisions. The law requires public disclosure if you do a mass layoff. I think it’s maybe over 200 or over X percent of a business unit. So IBM had a huge interest in keeping its outflow of people within different divisions and different units below these disclosure numbers. Right? So one of the things they did is that they would bring in a group of people who had now qualified for an early draw on their 401(k)s or had pensions, and they’d say, “You’re retiring.” Folks would say, “No, we’re not. You’re laying us off.” And they’d say, “No, you’re retiring.” And the company would record it as a retirement. Now one of the things that did is it made the people thus dumped from the company ineligible for unemployment benefits. They hid the layoff that way. They also set up a point system for rating people, not on their management or their technical accomplishments. They had a separate rating system for that. This was a point system for rating them as eligible for, less eligible for, or protected from layoff. And the point system we described was on its face biased against older people. Among other things, not holding a position for a long time got you a point; holding a position for a long time lost you a point. They’d do this elaborate tallying up of people’s numbers and would pick the ones who did not have a lot of points, who, lo and behold, turned out to be older workers. Another thing they would do: IBM had a lot of its workforce remote, at least in the United States. And so people were far-flung, right? IBM started saying, “Well, you’ve got to come to the office,” and by the way, you have to come to an office 3,000 miles away. So say you’re in your mid-50s, your kids are about to graduate high school or whatever, and you’ve got to pull up stakes and move. And they knew, and these most recent internal documents show they knew, that the typical take-up among workers over 40 for relocation is like eight percent. So I mean, it was effectively a way of forcing people out. But again, it wasn’t a layoff. If you quit because you wouldn’t take a relocation order, that was a voluntary quit. Right? It was, again, a technique for making your layoff numbers look lower.

 

[00:35:42] SY: And did IBM respond to these allegations of age discrimination at the time?

 

[00:35:46] PG: Basically, they said they don’t discriminate on the basis of age and they are good actors on all fronts. But they also took the position, in responding to the initial story, that we had to show them the documents that we saw before they would comment, and that would have been breaking faith with the people who helped us do the story. In fact, IBM has taken very much the same stance in responding to the new documents too.

 

[00:36:14] JP: Let’s talk about those new documents. You co-authored this piece back in 2018, and now in 2022, new documents have been made public by a federal district court. Can you talk about what new things have come out of the documents that have surfaced?

 

[00:36:30] PG: IBM built up an extraordinary legal armor around this process. A lot of people were forced into private, confidential arbitration. That means if they had a beef about getting laid off, they had to pursue it alone and they couldn’t pursue it in court. They had to go to an arbitrator chosen by IBM. These new documents grew out of cases where a class action lawyer in Boston, Shannon Liss-Riordan, both sued IBM in court and also said, “Fine, you want to do individual arbitration? We’ll do hundreds of them.” And she basically parallel-processed hundreds and hundreds of arbitrations. And so slowly over the last few years, there has been more and more information about IBM’s practices coming out. And in this case, I believe that this grew out of a case where basically the plaintiff said, “I know the paper says I’m supposed to do this in arbitration, but I think this arbitration business is illegal and I want to challenge the right of IBM to hide behind arbitration and confidential arbitration.” And the judge is yet to make a decision on whether the whole edifice, the arbitration edifice at IBM, is illegal. But he said, as a preliminary to making a decision, he was going to allow the release of these documents. And it’s important because if there’s a pattern at the top of a corporation, there’s a pattern across many people, and the thing about individual arbitration is you can’t see that pattern. You can’t bring it into the room. You have to litigate your particular case in front of a likely biased adjudicator. And so there are sort of two important things. One, it breaks the armor, but it also shows why people need the ability to talk to each other when they face a common wrong.

 

[00:38:26] SY: What were some of the major things that executives said about phasing out the company’s older workforce?

 

[00:38:34] PG: In the new documents, some of the things they said were pretty crummy. I mean, they called older workers “Dinobabies”. What’s so crummy about that is that these documents were released in the case of an IBM executive of something like 15 years, who was 57 at the time of his layoff and who subsequently committed suicide. So talking about Dinobabies and making a species extinct isn’t a great look for a corporation. And from the people we talked to and the material we worked with, even as the company was laying off people, we could see into the system and see that these people were highly rated by the company itself on their technical expertise in various areas. There’s a legitimate issue about whether or not you guys are more digital natives than me. There’s no issue there. It’s obvious you are. I mean, I barely could get the headset to work. But what’s so interesting about the material we were able to work with and the people we were able to talk to is that we could simultaneously show that the company’s system for rating these people for layoff was biased against them because of age, and that the majority of them were people who could move on to other positions in the company and would be promoted. So in terms of the competence that IBM itself said it thought it needed, these people rated highly. And the other thing that sort of gets you about that is that there is a whole subset of people we were talking to, and again, this is thousands of people, who the company then hired back to do the jobs they had before, just without benefits and at half to two-thirds of the pay.

 

[00:40:13] JP: Is there anything younger employees, older employees, tech employees in general can do to push their employers to do better on this issue or to stop this problem they see happening?

 

[00:40:30] PG: That’s a tough one, because basically 25 years ago there’d be an answer, which is, “Be sure they know the law.” Be sure the employers know the law and be sure fellow employees know the law, and pursue it hard. But because we’ve weakened the law so much, this is one of these cases where, I mean, I may be old fashioned, but it has seemed to me that in these situations, employees need unions. There needs to be a countervailing force that they can bring to bear on employers to make them obey what’s left of the law. And there need to be organizations that understand and intelligently pursue workers’ interests, including getting new laws passed to fill the gaps, the holes that have been poked in the existing laws on age discrimination. My children are not going to have the pensions I have. I actually have pensions and I’m fine financially. I believe that I’ve got a lot of stuff still to give and I don’t have a clear way to do it, but financially I’m fine. Now what about that 32-year-old programmer? I mean, without a union, you’re going in and saying, “I want you to start giving me better retirement benefits.” One thing you can do is listen to this old guy, this 70-year-old: just because these companies are in great shape now doesn’t mean they’ll be in great shape later. And if they ask you to commit your life, not just to a job, but to a mission, you’d better be pushing back all the time for big pay raises, big help, and help that you can stash away, not depending on them to stash it away. One of the things that was amazing about the reaction to the stories is that there was a huge kind of generational conflict thing: Boomers need to get out of the way. There may be something to that, particularly in tech, right? There may be something to that, but your listeners are going to grow old too, and they’re going to want to live a decent life between now and the time they grow old, and they’re going to want to live it when they grow old. So about the experience that my age cohort is having now, we’re not asking for sympathy. In fact, as the story I told you about me shows, I’m doing fine. But I’m doing fine because of provisions that unions won me that are still there.

 

[00:43:06] SY: Well, thank you again so much for joining us.

 

[00:43:08] PG: You’re welcome. I’m sorry to deliver such a dire message.

 

[00:43:11] SY: That’s okay.

 

[MUSIC BREAK]

 

[00:43:24] SY: Thank you for listening to DevNews. This show is produced and mixed by Levi Sharpe. Editorial oversight is provided by Peter Frank, Ben Halpern, and Jess Lee. Our theme music is by Dan Powell. If you have any questions or comments, dial into our Google Voice at +1 (929) 500-1513 or email us at [email protected] Please rate and subscribe to this show wherever you get your podcasts.