We're talking machine learning and app fairness
In this episode, we cover some updates about Ruby 3, OpenAI, and React 17's JSX Transform, and then a disconcerting data-sharing aspect of the exercise app, Strava. Then we chat with Alex Hanna, sociologist and research scientist working on machine learning fairness and ethical AI at Google, about Twitter’s algorithmic bias toward certain photos over others. And in light of all of the recent Apple App Store battles with Epic Games, WordPress, and others, we speak with Sarah Maxwell, spokesperson for the Coalition for App Fairness, about the organization's efforts to make the app marketplace a more even playing field for developers.
Saron Yitbarek is the founder of CodeNewbie, and host of the CodeNewbie podcast and co-host of the Base.cs podcast.
Vaidehi Joshi is Lead Product Engineer at Forem, and creator of the Base.cs blog series and co-host of the Base.cs podcast.
Alex Hanna is a sociologist and research scientist working on machine learning fairness and ethical AI at Google. Her research centers on the origins of the training data that forms the informational infrastructure of AI and algorithmic fairness frameworks, and the ways these datasets exacerbate racial, gender, and class inequality.
Sarah Maxwell is the spokesperson for the Coalition for App Fairness. Prior to that, she was an executive at Blockchain.com for three years, responsible for communications, expansion, and new business. Before crypto, she was an early employee at Uber, where she led communications and policy, helping to legalize ridesharing in the early days, and was on the founding team for UberEATS. Over the course of her career, Sarah has worked on numerous presidential campaigns and for companies disrupting the status quo.
[00:00:00] LS: Hey, DevNews listeners. This is Levi Sharpe, the producer of the podcast. We really want to benefit from your feedback on the show. So we’re gifting anyone who submits a review on Apple Podcasts, a pack of Dev stickers. All you have to do is leave a review on Apple Podcasts and then fill out the form in our show notes so that we have your mailing address for the stickers. Thanks for listening.
[00:00:31] SY: Welcome to DevNews, the news show for developers by developers, where we cover the latest in the world of tech. I’m Saron Yitbarek, Founder of CodeNewbie.
[00:00:39] VJ: And I’m Vaidehi Joshi, Senior Engineer at Forem.
[00:00:43] SY: This week, we’re talking about some updates about Ruby 3, OpenAI, and React 17 JSX Transform, and then a disconcerting data sharing aspect from the exercise app, Strava.
[00:00:56] VJ: And then we’ll be speaking with Alex Hanna, Sociologist and Research Scientist working on machine learning fairness and ethical AI at Google about Twitter’s algorithmic bias towards certain photos over others.
[00:01:09] AH: There’s something about the data that’s being used to train this idea of saliency that is very much codifying the male gaze for one and is codifying also a racialized gaze.
[00:01:23] SY: And in light of all the recent Apple App Store battles with Epic Games, WordPress and others, we’ll be speaking with Sarah Maxwell, Coalition Spokesman for the Coalition for App Fairness about the organization’s efforts to make the app marketplace an even playing field for developers.
[00:01:40] SM: I think that part of the need for a coalition like this is that there should be a group that advocates on behalf of everyone and is making sure that these issues are heard and taken seriously.
[00:01:54] SY: So on this final episode of the season, we have a handful of quick updates to some stories that we covered in a few previous episodes. The first one is about Ruby 3. So back in Episode 2, we talked about how it was announced that Ruby 3 will have a new language for type signatures called RBS. The signatures will be written in .rbs files, separate from the Ruby code, and one of the benefits of this is that you don’t have to change the Ruby code in order to type check it. Pretty big deal. Well, now you can actually play with it because Ruby 3 Preview 1 has been released. So have fun.
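For readers curious what these separate signature files look like, here is a small sketch. The `User` class is a made-up example, and the exact syntax is best checked against the RBS project's documentation:

```rbs
# sig/user.rbs -- type signatures live here, apart from the .rb source
class User
  attr_reader name: String
  attr_reader age: Integer

  def initialize: (name: String, age: Integer) -> void
  def adult?: () -> bool
end
```

You would then run a type checker such as Steep, or the `rbs` tooling that ships with Ruby 3, against your unchanged `.rb` files.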
[00:02:30] VJ: Also back in Episode 2, we spoke with Dan Abramov, Software Engineer at Facebook, creator of Redux, and co-author of Create React App. And we chatted with him about React’s decision to release React 17 with no new features, which surprised a lot of folks. Here’s what he had to say about that.
[00:02:48] DA: So with React 17, we wanted to enable this other way of upgrading, which we’re calling “gradual upgrades”. So this is not technically a feature of React. It’s not like we’ve added something to React. It’s more of a strategy that we want to consciously support now, whereas previously that was not really a supported way to use React.
[00:04:47] SY: Nice. And back in our first episode of the season, we talked about how OpenAI came out with an autocomplete program that could have huge implications for the future of artificial intelligence. The program is called GPT-3, which stands for Generative Pre-trained Transformer 3. And essentially, what is impressive about it is not only its scale, but the variety and scope of how many few-shot autocomplete tasks it can do, tasks it was not specifically trained to do. We spoke to Pedro Cruz, Developer Advocate at IBM, who teaches developers how to use AI and XR, about whether GPT-3 is as big of a deal as people were making it out to be.
[00:05:29] PC: It is a big deal, maybe over-hyped at this current moment. I think that there’s a lot of potential because we’re able to automate many things. For me, I’m not a really good writer. I prefer to create videos. That’s my preferred way of communicating or talking or speaking. So writing for me is difficult. So now I imagine that I can have a bot where I can tell it my ideas, I can start writing, and it can create a full draft of my blog posts. So in that sense, for me, it’s amazing. For other people who are coders, I saw a demo on Twitter of someone generating the Google website. I believe it was using React, and you just explain what it was supposed to be, how the website was supposed to look, and then it generated this code. So I think that, yes, this is the beginning. At least for me, I didn’t know how accessible this was and how we now have this tool to be more creative. I think that creative work is going to be the first place where we see a lot of this.
[00:06:32] SY: And now, as of last week, Microsoft has teamed up with OpenAI to exclusively license GPT-3. Microsoft says it will “leverage its technical innovations to develop and deliver advanced AI solutions for our customers, as well as create new solutions that harness the amazing power of advanced natural language generation”. However, OpenAI will still offer GPT-3 to folks through its own API, which I think is really good.
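To make the "few-shot" idea from the segment above concrete: instead of retraining the model for each task, you pack a handful of worked examples into the prompt itself and let the model continue the pattern. Here is a hypothetical sketch; the translation task and example strings are made up for illustration, and no actual OpenAI API call is shown:

```python
# Few-shot prompting: the "training" happens entirely in the prompt text.
# The model is asked to continue the pattern established by the examples.
examples = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
]

def build_few_shot_prompt(examples, query):
    """Assemble English->French examples followed by the new query."""
    lines = ["Translate English to French:"]
    for english, french in examples:
        lines.append(f"{english} => {french}")
    lines.append(f"{query} =>")  # the model would complete this line
    return "\n".join(lines)

prompt = build_few_shot_prompt(examples, "plush giraffe")
print(prompt)
```

The resulting text is what gets sent to the model as a single completion request; the model "learns" the task only from the pattern in those few lines.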
[00:07:00] VJ: In other news, Strava, an exercise tracking app used by runners and cyclists, sent out an email to their users over the weekend informing them that they’re actually selling the data that they collect on their users. So here’s the deal. Strava sells their user data to urban planners through a product that they call “Strava Metro”. Strava claims that the data they sell can “help cities improve infrastructure for cyclists and pedestrians”. Strava says that over four billion activities have been uploaded by athletes onto their app. And when those activities are added to the Metro product, the product effectively becomes a treasure trove of information around human-powered transport, which has definitely gone up since the COVID-19 pandemic started. But this isn’t the first time that Strava has been in the news for sharing user data. Just two weeks ago, Andrew Seward, the Founder of Tech Nottingham and a Strava user, tweeted about a disconcerting feature that actually allowed him to view another Strava user’s information because they had passed each other on their runs. He tweeted this: “Out running this morning and a lady runs past me, despite only passing, when I get home, Strava automatically tags her in my run. If I click on her face, it shows her full name, picture, and a map of her running route, which effectively shows where she lives.” You can turn off both this feature and the Metro data sharing in the privacy settings of your Strava app. But of course, both of these features default to being on, which is some pretty freaky stuff.
[00:08:36] SY: I can’t believe that is a default on. That is so much information to just give out and assume that most people are just okay with that because I’m sure most people don’t even know that’s a thing.
[00:08:45] VJ: Oh my gosh! Absolutely. I didn’t know. And I use Strava. And when I saw that tweet, I was like, “Wait a second! Turning that off!” And it’s all very hidden. There are some other privacy features that I had to like look up what it does in order to figure out what it was sharing and then I just turned everything off.
[00:09:04] SY: Yeah.
[00:09:05] VJ: But it’s a little scary that enough products out there have these features and then somebody who’s designing them is like, “Yeah. Let’s default them to on. That seems fair.”
[00:09:14] SY: Yeah. That’s just ridiculous. Coming up next, we are joined by Alex Hanna, Research Scientist working on machine learning fairness and ethical AI at Google, to talk about reports of Twitter photo algorithmic bias after this.
[00:09:46] SY: Heroku is a platform that enables developers to build, run, and operate applications entirely in the cloud. It streamlines development, allowing you to focus on your code, not your infrastructure. Also, you’re not locked into the service. So why not start building your apps today with Heroku?
[00:10:03] JP: Vonage is a cloud communications platform that allows developers to integrate voice, video, and messaging into their applications using their APIs. So whether you want to build video calls into your app, create a Facebook bot or build applications on top of programmable phone numbers, Vonage has you covered. Sign up for an account at developer.nexmo.com and use the promo code DEVTO10, that’s D-E-V-T-O-1-0, by October 10th for 10 euros of free credit.
[00:10:36] SY: Joining us to talk about reports of Twitter photo algorithmic bias is Alex Hanna, Research Scientist working on machine learning fairness and ethical AI at Google. Thank you so much for being here.
[00:10:48] AH: Yeah. Happy to be here.
[00:10:50] SY: So Alex, tell us a bit about your work background.
[00:10:52] AH: Sure. So I am a sociologist by training and I’ve been focusing on technology throughout most of my career. Right now, I focus a lot on looking at the data sets that we use for training machine learning models. So that includes data sets that are pretty common within computer vision. So those are the things that do things like facial analysis, object recognition, and other types of vision tasks. And I really want to look at these data sets from the perspective of what goes into creating them, who’s creating them, for what purpose, and how they can possibly introduce things like racial or gender or other types of bias into what they’re doing, and thinking through what kinds of harms may occur when they’re deployed in downstream applications.
[00:11:47] SY: Can you describe to us a bit about what these reports of Twitter algorithmic bias have looked like from the perspective of Twitter users? What are they seeing?
[00:11:57] AH: So Twitter has this feature where you upload a photo, and at least on the web interface, what it’ll show you is some kind of crop of the photo. It doesn’t necessarily shrink it; it tries to focus on one thing and show you what it thinks is going to be the most interesting. So people were uploading pictures. They’d have one picture of a white man in a business suit and one of a black man in a business suit, upload two versions of this, and then switch the position of each person. And a lot of Twitter users were coming up with the case where it would show just the white man. I even saw some of these where somebody would have a picture that was mostly of the black man and then one image of a white man, and it would actually focus on the white man in both cases. And so it was really kind of interesting to see what folks were feeding it and what was coming back. I think one of my favorites, although kind of the one that made me groan, is someone did this with Rachel Dolezal, who was a white woman who was impersonating a black woman and had been in organizations like the NAACP. Then they had a black woman, and it focused on Rachel Dolezal, and people were like, “Oh, no, this is the worst version of this. What’s happening here?”
[00:13:26] VJ: Can you give us some insight on why that thumbnail that’s shown in these Twitter image crops would show white people over black people? There was one also I saw where they had Ted Cruz with breasts versus normal Ted Cruz and it was showing the one where Ted Cruz had breasts over the normal one. So I’m curious, why would that happen?
[00:13:48] AH: So originally I thought what was happening was that Twitter had some kind of facial analysis model, and it was targeting faces, and the kind of failure was on faces. Then I was pointed to the original paper, where what apparently is happening is that Twitter has this model that measures this more amorphous concept called saliency, and people were saying, “What’s happening is it’s trying to find particular sorts of contrast in the images and focus on these areas of contrast.”
[00:14:17] VJ: Can you define for us one more time what saliency is?
[00:14:21] AH: I don’t know what it is. What is saliency? Saliency is a thing that I think has no objective definition. It seems to indicate what is “most important” in an image when a person looks at it, but that’s going to be pretty subjective. It’s kind of interesting, because if you have some kind of metric of salience, it’s trying to focus on something that’s pretty poorly defined; things like interestingness or salience are pretty subjective concepts. So their model would focus on something that was salient. What happens, or what can happen, in these situations is that no one is writing a machine learning classifier to say, “Only focus on the white person.” At least, I don’t think anyone at Twitter is doing that. There might be other people out there doing this, but there are ways in which these models, by focusing on some kind of thing that is not well defined, not well scoped, are actually introducing something that looks like bias. It looks like this technology is biased against a particular group of people. This is not intentional, but it is a byproduct, and it is another vector in which bias, racism, and sexism are operating. Another example that comes to the fore that’s not based on race, but that you sort of alluded to with this Ted Cruz example, was a set of images that were uploaded by a group, I think, that was having a set of speakers, including some friends of mine who are women, as well as the very well-known computer scientist, William McCune, and another male computer scientist, and it focused on the male computer scientists’ faces while it focused on the women’s breasts. And so this already tells you that there’s something about the data that’s being used to train this idea of saliency that is very much codifying the male gaze, for one, and is also codifying a racialized gaze, the desire to look at white faces, and we don’t really know where that’s happening in the model.
There are so many different parts of the process, including the data, including the bias checks that Twitter said that they did on this. And it goes up and up the line to where this is actually being introduced.
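For a rough intuition of what a contrast-driven crop could look like, here is a toy heuristic that scores windows of an image by pixel variance and picks the highest-contrast one. This is purely illustrative: Twitter's actual saliency model is a trained neural network, not a hand-written rule like this.

```python
# Toy contrast-based "saliency": pick the image window with the highest
# pixel variance. Illustrative only -- not Twitter's actual model.

def window_variance(image, top, left, size):
    """Variance of pixel values in a size x size window."""
    pixels = [image[r][c]
              for r in range(top, top + size)
              for c in range(left, left + size)]
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def most_salient_window(image, size=2):
    """Return (top, left) of the highest-contrast window."""
    rows, cols = len(image), len(image[0])
    candidates = [(window_variance(image, r, c, size), r, c)
                  for r in range(rows - size + 1)
                  for c in range(cols - size + 1)]
    _, r, c = max(candidates)
    return r, c

# A mostly flat "image" with one high-contrast patch at bottom-right.
image = [
    [10, 10, 10, 10],
    [10, 10, 10, 10],
    [10, 10, 0, 255],
    [10, 10, 255, 0],
]
print(most_salient_window(image))  # → (2, 2)
```

A heuristic like this has no notion of people at all, which is exactly why biases in the training data of a learned version are so hard to trace from the outside.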
[00:17:12] SY: So I know that we, as the consumers, don’t really know where Twitter exactly pulls their data from to train their algorithm, but do we know where this kind of data typically comes from? And is there anything we can do about it to make it less biased?
[00:17:25] AH: So I haven’t read the original paper. My sense, though, is that what they’re doing, and what folks generally tend to do with these data sets that they create, is that they have some kind of notion, let’s say, of saliency. What they’ll do is construct a set of rules that say, “Okay, let’s create an instruction book for human annotators and say, ‘Okay, in this image, click on the thing that you think is the most interesting.’” Or some more, let’s say, “objective”-minded machine learning engineers may use eye-tracking software. And the annotators they’ll typically use will come from these crowdworking services, often very low-paid, poorly supported labor. For one, that’s done either by folks who are contingent or underemployed, or it’s done in poor non-Western countries. And so then these data, or these annotations, get outsourced to these services or to these firms, and then they’ll return and say, “Okay, so this is where the saliency is.” And then companies are taking those as, like, ground truth. There may be some kind of data quality checks, but there’s so much more in what’s happening. There’s some great research that Milagros has done where she’s done ethnographic work with data annotators. One of the things that she and her research team have found is that there’s not really good communication between, like, the firms, the annotators, and the people requesting them. So if there is this idea of salience, which is very subjective, they don’t really have a way of conveying that back to the requester. Or if that does get conveyed, it sort of stops at the top of that firm and doesn’t really go any further. And so we don’t really know very much about how that’s affecting these sorts of results or these types of errors.
And Twitter said they had some tests on the backend, saying, “Oh, we checked this for bias and whatnot, and we didn’t detect it,” which is good and fine, and I think that work is important. But I’d argue further that we need to go back, and we really need to think about these annotations and who’s doing them.
[00:20:02] VJ: Twitter says that it’s now investigating these reports. What would it actually look like for them to start mitigating these algorithmic biases that we’ve been talking about?
[00:20:12] AH: So Twitter said that they had done some bias assessments, kind of like the citizen science that we saw folks doing. There are methods and ways of doing internal audits of those models where they are doing some of that work. I mean, that’s known as robustness testing, or on some teams, adversarial testing. We also need to take a step back and think about what this functionality is doing, if it’s desirable, if it’s ever actually going to work, if there is this thing like saliency. Can we actually have something that is salient that is “unbiased”? And it might be, in my sense, nonsensical enough that maybe saliency is not actually what you want to aim for when you do something like cropping. You could do something that just takes the centroid of an image and then focuses on the centroid, or you take a step back and think, “Okay, do we need any kind of cropping at all, or do we have kind of an image preview that shows the entire image? Is it better to redesign this altogether and think of another way to do this that doesn’t become another vector of bias?”
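The content-blind alternative Hanna mentions, cropping around the image's center rather than a learned saliency score, could be sketched like this (a plain 2D list stands in for an image):

```python
# A content-blind alternative to saliency cropping: always crop around
# the geometric center of the image, regardless of what it contains.

def center_crop(image, crop_rows, crop_cols):
    """Crop a crop_rows x crop_cols region around the image center."""
    rows, cols = len(image), len(image[0])
    top = (rows - crop_rows) // 2
    left = (cols - crop_cols) // 2
    return [row[left:left + crop_cols]
            for row in image[top:top + crop_rows]]

image = [[r * 10 + c for c in range(6)] for r in range(4)]  # 4 x 6 "image"
cropped = center_crop(image, 2, 2)
print(cropped)  # → [[12, 13], [22, 23]]
```

The trade-off is obvious: a center crop can never prefer one face over another, but it also cannot keep an off-center subject in frame, which is why the redesign question Hanna raises (do we need cropping at all?) matters.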
[00:21:29] SY: So one of the responses to the complaints, the comments about the bias that was shown, is from the communications person at Twitter, who basically said, “We’re working on it. We didn’t know it worked this way, and we’re going to open source our technology so we can show you what we’re working on,” which made me wonder, what is the role of the open source community in kind of figuring out these biases, especially when it comes to dealing with them in corporations and in companies that make revenue and profit? What does that relationship look like?
[00:22:05] AH: That’s a really good question. I think one kind of role for the open source community, especially when it comes to things like looking at models, is that if a company has open sourced a model, it might be an opportunity to take that model, provide it with a suite of tests or do some kind of robustness testing against it, and assess whether it works for a much broader range of people. It’s a nice sign to see that Twitter is willing to release some model and provide it. I would say that the harder part of it is that it would be kind of wonderful if Twitter could release the data that’s used to train that model, but releasing the data used to train models, I know, has many more issues, especially if you’re thinking about who owns the copyright on those data, whether they even have the right to host the data, who that data belongs to. And so then we get into some kind of tricky legal territory, where when someone uploads something to Twitter, they ostensibly still own that image, or ostensibly still own the copyright of the image. I don’t know the specifics of Twitter’s terms of service. So can they actually release those data? And then what does it mean for folks to download those data? And this goes as well for the annotations, because annotation work takes so much human effort, it’s pretty expensive to obtain. Do they actually want to release those annotations? I would actually love to see companies release more of their data. I think that’s a pretty important missing step to really showing the transparency in their pipeline. Releasing the model gives you an endpoint and it gives you an inference. But at the end of the day, we really can’t make definitive claims about what this model was actually trained on unless we have those data.
And some companies have internal tests of their data, but it would be kind of wonderful to also get that released in some sense, to potentially have some kind of evaluation server or some way that they can control how these data get distributed. That way, researchers and the open source community can do audits of the data instead of having to rely just on the endpoint of the model and kind of guessing what data it’s been trained on. That’s sort of the missing link. That would be really helpful for kind of community accountability and for transparency into what these models are doing.
[00:24:36] SY: Thank you so much for joining us.
[00:24:38] AH: Thank you.
[00:24:45] SY: Coming up next, we chat with Sarah Maxwell, Coalition Spokesman for the Coalition for App Fairness after this.
[00:25:03] SY: Heroku is a platform that enables developers to build, run, and operate applications entirely in the cloud. It streamlines development, allowing you to focus on your code, not your infrastructure. Also, you’re not locked into the service. So why not start building your apps today with Heroku?
[00:25:20] JP: Vonage is a cloud communications platform that allows developers to integrate voice, video, and messaging into their applications using their communication APIs. So whether you want to build video calls into your app, create a Facebook bot or build applications on top of programmable phone numbers, Vonage has you covered. Sign up for an account at developer.nexmo.com and use the promo code DEVTO10, that’s D-E-V-T-O-1-0, by October 10th for 10 euros of free credit.
[00:25:54] SY: Here with us to talk about the Coalition for App Fairness and its efforts to make the app marketplace a more even playing field for developers is Sarah Maxwell, Coalition Spokesman. Thank you so much for being here.
[00:26:05] SM: Thank you for having me.
[00:26:06] SY: So Sarah, tell us about the Coalition for App Fairness and what its mission is.
[00:26:10] SM: Yeah. The Coalition for App Fairness is an independent nonprofit founded by a bunch of industry-leading companies and app developers to advocate for freedom of choice and fair competition across the app ecosystem.
[00:26:24] VJ: Is there a main person or a specific group of people who came up with the idea for this coalition?
[00:26:30] SM: The coalition was founded by 13 members across the US and Europe. They range from well-known brands to sort of smaller ones. We have Blix, Basecamp, Blockchain.com, Deezer, Epic Games, the European Publishers Council, Match Group, News Media Europe, Prepear, ProtonMail, SkyDemon, Spotify, and Tile. Basically, they all had their own individual experiences with Apple and the App Store, not positive experiences if you can imagine, and had gotten frustrated about the treatment they had been experiencing, and wanted to really come together as one collective voice to sort of take these issues on without having to pursue legal action. So they formed the coalition, and it’s available and open to any app developer or creator who has had similar treatment or cares about these issues and wants an even and level playing field for the app ecosystem. We’ve seen a huge response since we launched last week and are kind of working through all the various membership inquiries, and I expect us to have a lot more new members in the coming days.
[00:27:50] SY: Tell me a little bit more about this treatment. We’ve had conversations about this on the show and obviously a lot of conversations on social media and just on the internet in general about this treatment, this unfairness, but I’d love to kind of just hear it from you. What is the definition of unfairness and treatment for the coalition?
[00:28:08] SM: We focused on three issues that we believe are the biggest problems that the ecosystem is facing today. So the first one is anti-competitiveness, the behavior of Apple and other platforms in being anti-competitive and self-preferencing their own apps or services. The second is the 30% app tax that is charged to app developers and passed on to consumers. And then the third one is really about consumer choice. So you really don’t have a lot of freedom when you purchase an iPhone or other devices in terms of what apps you want to interact with.
[00:28:49] SY: So what should the fee be? If not 30%, is there a number that you all are advocating for?
[00:28:55] SM: So first we put together what we call our 10 App Store Principles, which outline what we believe would create a level playing field on a platform or within an app store. And those 10 encompass a wide variety of things. So everything from a developer’s data should not be used to compete with the developer, to every developer should have the right to communicate directly with its users for legitimate business purposes, and also no app store should engage in self-preferencing of its own apps or services. So we have these 10 principles that we believe will create that level playing field and allow for there to be consistency and fairness across the entire app ecosystem. When it comes to the app tax itself, 30% is a really high number. We have yet to see Apple justify the need for 30%. And I think it’s really open to discussion as to what the number actually should be and what that fee covers. It’s really opaque and unclear what the 30% is paying for right now. And because Apple doesn’t charge 30% across every app, some places they charge 0, some places they charge 15, and in other places 30, it is hard to know what the perfect number would be.
[00:30:23] SY: Interesting. So if everyone were charged 30 and that was open and kind of known, then no matter who you are you get charged 30, would that be acceptable? I’m trying to understand, is it the fact that people are paying different percentages or is it the fact that 30 itself is too high or both?
[00:30:39] SM: I think it’s a little bit of both. Thirty percent is an extremely high tax if you don’t know what you’re getting. Now if it’s just for payment processing, like just for using Apple’s payment systems, the average transaction fee costs about 5% from other services. So if that’s what the purpose of the fee is, then we’d expect it to be in line with other industry standards. Now should the fee be charged to everybody? I think there should be consistency. The problem with Apple’s App Store policies is that they are quite arbitrary. They apply to some, not all, they don’t apply to their own apps, and it makes it really confusing. So if you’re creating an app, and we just saw this recently with services that had to pivot during the pandemic from in-person events, for example, to online, and that’s as simple as you used to book a yoga class in the studio and now you’re going to be joining a streaming version of it. That type of app wouldn’t have been charged 30% before. They would have been charged something lower, but because they’ve shifted, they’re now subject to that larger fee. And I think that that’s a really tough thing for these businesses to have to adjust to, giving away a third of your fees to Apple for a reason that you can’t really explain, or for services you’re not sure you’re getting. On top of the $99 developer fee it costs to even be a developer on the Apple App Store, it’s challenging.
[00:32:21] SY: So if you had it your way and the things you were advocating for came into effect and happened, who do you see it affecting? Do you see it affecting kind of indie developers or is this more kind of to protect companies, more established corporations? Who do you see it impacting the most?
[00:32:37] SM: Well, it should be a positive change for everybody. I think that the individual developers really do feel the burden of Apple and the struggle of having to communicate with the App Store reviewers, especially in a situation where maybe the current build that they have was denied or rejected and they don’t really have a clear understanding as to why. What I’ve heard time and again is, “Our app was rejected for X reason, but they didn’t give us any details. And so we had to guess and adjust our UI and UX, submit another build. It was rejected again.” And then they go through this kind of constant process that isn’t really sustainable. And it’s something that the principles would ensure wouldn’t happen. You should understand what specific issue is causing your app to be rejected. You also should just have normal communication. There’s no reason that you should go to bed and worry the next morning if your app has been pulled from the App Store with no notice. That’s happened to a number of our members in the past. So that just seems basic regardless of your size. That just seems like a very standard thing that we should all expect.
[00:34:00] VJ: This coalition feels very much like a kind of union, but maybe for third parties rather than specifically Apple employees. Do you feel like that’s an apt comparison?
[00:34:12] SM: There isn’t a coalition like this out there for the tech space today. So yeah, I could see where the comparison is sort of similar, because there really hasn’t been anything of its kind previously. I think that part of the need for a coalition like this is that there should be a group that advocates on behalf of everyone and is making sure that these issues are heard and taken seriously. And they haven’t been for basically the last decade, and there hasn’t been any meaningful change. And if anything, I think Apple’s gotten more aggressive with their treatment of app developers and created this very hard, difficult environment for them to operate in. So we certainly would love to give a voice to those who have had that treatment before and work with the platforms to make sure that we’re creating these great ecosystems that people are really happy to create on, that consumers benefit from, and that look like the future we’re all moving towards. I think that’s a really positive thing.
[00:35:19] SY: So I think that the things you’re fighting for make a lot of sense to me and clearly you have some strong names behind the coalition. But I’m wondering if I am Apple, why would I listen to you? It’s definitely not to my advantage to decrease my 30% fee, for example. And so what leverage do you have? What power do you have? Why do you think that Apple would frankly take these objectives and these goals seriously?
[00:35:48] SM: I think Apple wants to be a fair, competitive market and offer a really great user experience. Something they care about deeply is making great products that people like to use, and the iPhone or iPad is not as functional, useful, or popular if your favorite brands don’t have their apps on it. So I think they need to decide: should they offer an iPhone that only has Apple-specific apps on it, or should they be more open, consistent, and fair with everybody else? We’re kind of in the middle right now. It’s not working, but they’re going to have to innovate. We’ve seen them make changes in the past and be very creative and come up with awesome products. So I hope that they are willing to engage on this issue, take the principles that we’ve created seriously, and make efforts to adopt them.
[00:36:52] SY: So recently there have been a number of lawsuits and public disputes involving some very well-known companies fighting against Apple and some of these practices. Where does the coalition stand on those issues?
[00:37:08] SM: So the coalition isn’t a party to any of the lawsuits. Actually, we believe in being proactive and advocating versus taking the litigious legal route. I think there are a lot of ways that you can make an impact and create change, and our belief is that we can do that through advocacy, through engaging with the various platforms, and trying to get them to adopt the principles that we’ve outlined. Until now, there hasn’t really been any way for companies to take on Apple in a meaningful way, in a way that they would listen, with the exception of speaking to the media and then filing a lawsuit. And that’s really what the coalition is all about: providing that collective voice that Apple will take seriously.
[00:37:58] SY: So how can people get involved? Is this something that developers can join, companies can join? What does that look like?
[00:38:06] SM: We are open for membership. We’d love to have a wide, diverse group of developers and companies participate and join the coalition. You just visit our website, appfairness.org, and you can sign up there. Another way to support our efforts is to follow us on Twitter and our other social media channels and help to get the word out. And then the other thing is to share your stories. I think there’s been a fear of retaliation and a fear to speak out previously. The more stories that we have and the more that we can bring to light, in terms of the various behaviors and treatment that developers have faced over the years, the better and the stronger we will be. I’ve already heard from a handful of people in the last couple of days, or actually even the last 24 hours, about their experience. Somebody just sent me something this morning, and the unfair treatment and monopolistic behavior is really real. So the more people who are willing to share their experiences and call Apple out, the better. We believe in strength in numbers, so certainly we’d love to have everyone join the coalition and help us get the word out about this in a very collective way.
[00:39:29] SY: Well, thank you so much for joining us.
[00:39:30] SM: Thank you so much for having me.
[00:39:42] SY: Thank you for listening to DevNews. This show is produced and mixed by Levi Sharpe. Editorial oversight by Josh Puetz, Peter Frank, Ben Halpern, and Jess Lee. Our theme music is by Dan Powell. If you have any questions or comments, dial into our Google Voice at +1 (929) 500-1513, or email us at [email protected]. Please rate and subscribe to this show on Apple Podcasts. See you next season.