Season 2 Episode 4 Sep 2, 2020

What Are Our Ethical Responsibilities as Developers?

Pitch

Just because something can be made doesn't mean it should be, and we should be thinking about that more in our code.

Description

In this episode, we chat about ethics in code with Nashlie Sephus, applied science manager at Amazon Web Services AI, and Abram Walton, Director of the Center for Lifecycle and Innovation Management and former Director of the Center for Ethics and Leadership at Florida Tech.

Hosts

Jess Lee

Jess Lee is co-founder of DEV.

Ben Halpern

Ben Halpern is co-founder and webmaster of DEV.

Guests

Nashlie Sephus

Dr. Nashlie H. Sephus is an Applied Science Manager for Amazon's Artificial Intelligence (AI), focusing on fairness and identifying biases in the technologies. She founded The Bean Path, a non-profit organization based in her hometown of Jackson, MS, that assists individuals with technical expertise and guidance.

Abram Walton

Dr. Walton is a Professor of Management and Innovation at Florida Tech, specializing in Technology and Innovation. In addition to his academic pursuits, he serves as the Chairman of the Innovation Council and on the Executive Board of Directors for the Space Coast Economic Development Commission. He is a Senior Partner of a management consulting and technology commercialization firm.

Show Notes

Audio file size

67,422,206 bytes (about 67 MB)

Duration

00:46:49

Transcript

[AD]

 

[00:00:01] JL: This season of DevDiscuss is sponsored by Heroku. Heroku is a platform that enables developers to build, run, and operate applications entirely in the cloud. It streamlines development, allowing you to focus on your code, not your infrastructure. Also, you’re not locked into the service. So why not start building your apps today with Heroku?

 

[00:00:18] BH: Fastly is today’s leading edge cloud platform. It’s way more than a content delivery network. It provides image optimization, cloud security features, and low latency video streaming. Test run the platform for free today.

 

[00:00:31] JL: Triplebyte is a job search platform that allows you to take a coding quiz for a variety of tracks to identify your strengths, improve your skills, and help you find your dream job. The service is free for engineers and Triplebyte will even cover your flights and hotels for final interviews. So go to triplebyte.com/devdiscuss today.

 

[00:00:48] BH: Smallstep is an open source security company offering single sign-on SSH. Want to learn more about Zero Trust Security and perimeterless user access? Head over to the Smallstep organization page on Dev to read a series about Zero Trust SSH access using certificates. Learn more and follow Smallstep on dev.to/smallstep.

 

[AD END]

 

[00:01:12] NS: I really like to see technology solve problems, and in a way that is not disruptive in a bad way, not in a way that violates people's rights and harms people.

 

[00:01:34] BH: Welcome to DevDiscuss, the show where we cover the burning topics that affect all our lives as software developers. I’m Ben Halpern, a co-founder of Dev.

 

[00:01:41] JL: And I'm Jess Lee, also a co-founder of Dev. Today, we're talking about ethics and code with Nashlie Sephus, Applied Science Manager at Amazon Web Services AI, and Abram Walton, Director of the Center for Lifecycle and Innovation Management and former Director of the Center for Ethics and Leadership at Florida Tech. Thank you both so much for joining us.

 

[00:01:58] NS: Thanks for having me.

 

[00:01:59] AW: Thanks for having us.

 

[00:02:00] BH: So Nashlie, you have a very interesting background in machine learning and AI. Can you tell us a little bit about what you have been doing and how you ended up at Amazon?

 

[00:02:11] NS: I started at Amazon in a somewhat non-traditional way. I was the CTO of a startup company called Partpic that was acquired by Amazon, and I was one of the stakeholders. So we were able to sell the company. Partpic basically did visual search for replacement parts. So we would train artificial intelligence and computer vision models to recognize and measure parts by taking a picture of them or taking a video. I was presenting about that in Boston in May of 2016, Amazon was in the audience, and one thing led to another and it happened. So it was kind of just nothing we were really expecting. It just kind of all happened. And before that, I did my undergrad in computer engineering at Mississippi State University, and I did my PhD in computer engineering at Georgia Tech.
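
For listeners curious what "visual search" looks like under the hood, here is a minimal, hypothetical sketch of the nearest-neighbor idea behind systems like the one Nashlie describes: embed a query photo with a vision model, then rank catalog items by embedding similarity. The embed() stand-in, the part IDs, and the vector size are illustrative assumptions, not Partpic's or Amazon's actual code.

```python
# Minimal visual-search sketch: embed images, then rank catalog items by
# cosine similarity to the query embedding. embed() is a placeholder for a
# real computer-vision model (hypothetical, illustration only).
import numpy as np

rng = np.random.default_rng(0)

def embed(image) -> np.ndarray:
    """Stand-in for a CNN that maps an image to a 128-dimensional feature vector."""
    return rng.normal(size=128)

# Pretend catalog: part IDs with precomputed, normalized embeddings.
catalog_ids = [f"part-{i:03d}" for i in range(1000)]
catalog_vectors = np.stack([embed(None) for _ in catalog_ids])
catalog_vectors /= np.linalg.norm(catalog_vectors, axis=1, keepdims=True)

def search(query_image, top_k=5):
    """Return the top_k catalog parts most similar to the query image."""
    q = embed(query_image)
    q /= np.linalg.norm(q)
    scores = catalog_vectors @ q                  # cosine similarity per part
    best = np.argsort(scores)[::-1][:top_k]
    return [(catalog_ids[i], float(scores[i])) for i in best]

print(search(query_image=None))
```

In a production system, the embeddings would come from a trained model and the similarity search would typically use an approximate nearest-neighbor index rather than this brute-force matrix product.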

 

[00:03:01] JL: So how has your role evolved since you’ve been on AWS? What exactly does an applied science manager do?

 

[00:03:07] NS: So I've done the whole gamut. I've managed people, engineers. I've managed scientists. I've managed data analysts because, of course, in machine learning and artificial intelligence, there's a lot of work involved in cleaning up the data, sanitizing the data, labeling, annotating. I've been there a little over three and a half years now. For the first two and a half years, I worked on the visual search team. So if you've ever used the Amazon Shopping app, you can open it, click on the camera button, and it'll help you search for whatever you take a picture of. We worked on that, and then I switched to the AWS, which is Amazon Web Services, AI team a little over a year and a half ago, where I focus on fairness and biases in AI technology. And it's been pretty interesting. I'm actually not managing people now. I do have a summer intern, but I'm more so an individual contributor at this point.

 

[00:04:01] BH: Did you trigger the move into fairness and bias in AI or was that asked of you based on your prior work?

 

[00:04:09] NS: Well, it started with a conversation that I was having with a speaker at a conference. It was an internal conference on computer vision. And I remember sitting down and talking with this individual, and we just started talking about the current state of things, especially face recognition technology, face analysis, search results, and speech analysis, and how some of these things could be affected by various types of biases that get introduced. So one thing led to another, and they were offering me a job. So that's kind of how it happened. It was, again, another situation where I wasn't planning on switching teams. It's kind of good to know that, just being me, people can see some potential and are interested in helping me further my skills. So I really appreciate that.

 

[00:04:59] JL: Abram, can you tell us about the Center for Ethics and Leadership?

 

[00:05:03] AW: It's an interesting center. I think Nashlie asked about Harris before we started the call. And companies like Harris, and now it's L3Harris, a big tech firm, were the leaders in coming to folks like myself and other industry experts and saying, "We really want to discuss and push ethical considerations and mindsets and frameworks down into the collegiate level, but also why don't we start even earlier and go into the high school level?" So there was sort of a flavor of things we did over the years. We had, I guess, your smaller events like luncheons and guest speakers, and I'll give you a couple of ideas of what we covered there, up to half-day and full-day conferences. And again, largely this was either high-school focused or collegiate, student focused all the way from undergraduate to grad, and then the inclusion of faculty as well as industry. So we usually had really good attendance, several hundred people at every event. Once a year, we did what's called a High School Ethics Competition. We actually put well over a hundred thousand dollars in scholarships and awards toward the high school teams, and we gave them some really difficult scenarios, usually involving some hot topics. It could be tech, it could be something international or something that was recently in the news, and within one day, they'd read the case, they'd analyze the case, they'd make the recommendations, and then they'd present to a board of experts. And this board of experts usually included maybe a chief legal counsel. In this case, like for Harris, Harris was a major contributor. They were always there as a judge. So that's sort of, I guess, the breadth of what we did. I can get into more technical, specific things if you have questions.

 

[00:06:45] JL: So it seems like ethics is something that is really coming to the forefront of discussions within the tech industry. I want to hear both of your takes on the current state of ethics within tech.

 

[00:06:56] NS: Sure. Sure. That is a big topic. And so I will start with, of course, my lane, which is artificial intelligence and machine learning. As a producer and a developer of this technology, as well as a consumer, I see many different angles. I see the concerns that people have about their rights and their privacy being violated with certain technologies. For example, face recognition, face analysis, several software products, even data breaches; with Facebook, for example, what are they doing with your data? And then, of course, I'm also a technologist who really likes to see technology solve problems, and in a way that is not disruptive in a bad way, not in a way that violates people's rights and harms people. And so it often becomes a thin line, and some people would say, "Okay, don't sell the technology. Let's not partake in the technology. Let's regulate it better." And then on the other hand, once you've released software or products, it's often very hard to stop people from using them once they're out there, which is an actual case with Flickr, training models with faces of people who never agreed to have their faces used for training. There were also some articles about people who, for example, saw pictures of their kids in this photo database and had no knowledge of that beforehand. There's a New York Times article that says over 50% of the people in the United States have already had their face used in some database for machine learning and AI, whether they know it or not. And those numbers actually increase in places overseas. So what we've done is we've built things that people thought were cool, but now we're running into issues where it's not just a matter of "Hey, can we do it or not?" but "Should we do it?" And that takes a lot of different people. It takes different stakeholders. It takes public policy. It takes government. It takes legal, and PR certainly has a lot to do with it as well. For example, if things and narratives are being pushed that are not true, that's a concern. And then if there are narratives that are revealing and transparent that companies don't want to be transparent about for whatever reason, that's definitely a problem too. So this is the current state. That's kind of like my day-to-day and the things that I think about and deal with on my job.

 

[00:09:31] BH: I'm wondering where everyone on this call falls on the spectrum between techno-optimism and pessimism, because I feel like that tends to inform how people think. If you're overly optimistic, you're maybe unwilling to have this discussion. But if you're overly pessimistic, you might be too quick to shut things down in a way that might not be helpful. So, Nashlie, where do you fall on this spectrum? How do you feel about tech in general?

 

[00:10:00] NS: So I believe that technology can help a lot of people. I do believe that more thought and measures have to be put in place before releasing these technologies, and that a wider group of people has to be involved. That's something that government is now listening to. A lot of corporations are now listening about diversity and inclusion in technology. Also, with businesses and entrepreneurs and funds and funding for certain startups and technology and innovation, people are realizing the systemic racism that can exist there, and the same exact thing goes for technology. It is not exempt, and it should be considered in the same exact way. And I think what you're seeing right now is people finally starting to, one, at least listen and hear the other side of the story, and then, two, take more action and not just stand by. So again, I think I'm definitely on the part of the spectrum of let's hear what both sides have to say and make sure that we're considering all things. And we want to make sure that the technology is fair and unbiased. Sometimes biases are preferred, depending on the use case. But if that's not the case, then we need to make sure that, one, we're ethically sourcing data, two, we're testing it on a fair and very diverse test population, and three, we have certain government regulations to say, "Hey, this is not okay to do." So I think we still have quite a ways to go. I'm in support of halting any sales or anything that needs to be halted until we figure those other pieces out.
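
As a concrete illustration of what testing on a diverse population can mean in practice, here is a minimal sketch of a disaggregated evaluation: instead of one overall accuracy number, compute accuracy per demographic group and look at the largest gap between groups. The labels, predictions, and group names below are toy, hypothetical data, not results from any real system.

```python
# Per-group ("disaggregated") accuracy: a simple way to surface bias that a
# single overall accuracy number can hide. All data here is toy and hypothetical.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return {group: accuracy} computed only over that group's examples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        hits[group] += int(truth == pred)
    return {g: hits[g] / totals[g] for g in totals}

# Toy labels: 1 = correct match, 0 = incorrect (hypothetical results).
y_true = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
y_pred = [1, 0, 0, 1, 0, 1, 0, 0, 1, 1]
groups = ["group_a"] * 4 + ["group_b"] * 6

per_group = accuracy_by_group(y_true, y_pred, groups)
gap = max(per_group.values()) - min(per_group.values())
print(per_group, "largest accuracy gap:", round(gap, 3))
```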

 

[00:11:37] AW: I like one of Nashlie's last comments there, about how we source data, because in the academic world, we do studies all the time and we have to go through what's called an institutional review board, where if we're collecting data that's not fully innocuous, anonymous, and de-identified, then we have to provide and obtain an informed consent authorization from participants. And that's for anything from a "How do you feel about something?" survey to "Let me do some medical testing on you." You have to do this fully informed consent. We see this in the tech world under user agreements and things like that. When you unpack this ethically sourced data topic, which is interesting, it's very question-begging. It's like, "Well, what exactly does that mean?" Because especially in the tech world, I think a lot of us can look at things and go, "Well, I know how that could be used," and if it could be used that way, it probably will be used that way. And just because someone leaves, let's say, an Instagram account open, that doesn't necessarily mean they hoped for someone to go make a deepfake using their pictures.

 

[00:12:36] JL: For an audience that’s not familiar with what a deepfake is, can you please explain?

 

[00:12:41] AW: So for a deepfake, you get enough pictures and enough different variations of someone's face, because if you think about what's on Instagram a lot of the time besides food, it's selfies, right? So you get someone's facial shot over time from a massive number of different angles, including video, and you can put that into big, big data systems and train an AI system to basically create a video game of their face, but one that looks and sounds real. And a deepfake just means it's so fake it's deep. I mean, it's deep all the way down to the essence of someone: the way they say certain words, their various facial expressions, if they happen to have a twitch or a slur, or if they have an accent. It carries all of that with it. So it's deep. So ethically sourced data, I think, is critical. I tend to lean toward the more optimistic side. However, I know there are people who will make the assumption that if you let your personal data or whatnot be openly shared, it's almost like they go, "Well, then they should have known better. They shouldn't have shared it publicly." This is something that has sort of vacillated over the years. I think the responsibility is therefore incumbent on the tech side to go, "The lay public really doesn't understand what they're sharing, let alone the downstream consequences, how it could later be likely used, what could happen with a picture I shared 10 years ago and maybe a deepfake later." Because even the idea of deepfakes didn't exist at the level we're seeing now. So that's what I think builds this sort of vacillation. And I think I'd be leery of someone who has a hard-and-fast, "Here's my belief on the thing: if you're going to release something, you should fully accept the consequences." Because what's possible with tech does change over time, and therefore the consequence potential, I think, changes over time. So I think we need to be consistently aware of it, reflective on it, and then also communicative toward the users.

 

[00:14:35] NS: One use case of deepfakes is somewhat controversial as well, but is generally seen as a more positive case: all this data that we need to diversify our face recognition training and testing datasets. If we can create a diverse set of fake faces of various skin tones, various ethnicities, and different age groups, then perhaps we could improve our machine learning models without having to get data unethically from other sources or potentially purchase it some other way. So that is one use case that people use to try to improve their models. And it's an ongoing field of study that hasn't really been proven out. I mean, there are some studies that have proven successful, some that have not, but I think you'll see a lot more of this in the future.
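
To make that idea concrete, here is a rough sketch of how synthetic faces could be used to balance a skewed training set: count examples per group, then top up underrepresented groups with generated samples until every group matches the largest one. The group names, counts, and the generate_synthetic_face() placeholder are hypothetical; a real pipeline would plug in an actual generative model plus careful quality and consent checks.

```python
# Dataset-balancing sketch: add synthetic examples for underrepresented groups.
# Counts, group names, and the generator are placeholders for illustration.
from collections import Counter

def generate_synthetic_face(group):
    """Stand-in for a generative model (e.g., a GAN) conditioned on a group."""
    return {"image": f"synthetic-{group}", "group": group, "synthetic": True}

# Toy dataset: each record is tagged with an annotated demographic group.
dataset = (
    [{"image": f"real-a-{i}", "group": "group_a", "synthetic": False} for i in range(900)]
    + [{"image": f"real-b-{i}", "group": "group_b", "synthetic": False} for i in range(150)]
    + [{"image": f"real-c-{i}", "group": "group_c", "synthetic": False} for i in range(50)]
)

counts = Counter(record["group"] for record in dataset)
target = max(counts.values())  # balance every group up to the largest group
for group, count in counts.items():
    dataset.extend(generate_synthetic_face(group) for _ in range(target - count))

print(Counter(record["group"] for record in dataset))  # each group now has 900 examples
```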

 

[MUSIC BREAK]

 

[AD]

 

[00:15:48] JL: Over nine million apps have been created and run on Heroku's cloud service. It scales and grows with you, from free apps to enterprise apps, supporting things at enterprise scale. It also manages over two million data stores and makes over 175 add-on services available. Not only that, it allows you to use the most popular open source languages to build web apps. And while you're checking out their services, make sure to check out their podcast, Code[ish], which explores code, technologies, tools, tips, and the life of the developer. Find it at heroku.com/podcast.

 

[00:16:21] BH: Empower your developers, connect with your customers, and grow your business with Fastly. We use Fastly over here at Dev and have been super happy with the service. It's the most performant and reliable edge cloud platform. Fastly CDN moves content, data, and applications closer to your users at the edge of the network to help your websites and apps perform faster, safer, and at a global scale. Test run the platform for free today.

 

[AD END]

 

[00:16:48] BH: Where do you think regulation comes into play when we’re talking about these problems? Some of the things we’ve discussed can be exceptionally harmful. And if we’re talking about data sourcing and privacy, that’s an obvious area where laws tend to come into play. How do you think that should be handled?

 

[00:17:07] NS: So I've been to Capitol Hill a couple of times, speaking with members of various organizations and congressional policymakers. What I've found is that the majority of these people, and it's totally understandable, really don't understand how the technology works. And that's something I never really thought about as an engineer, as a technologist, having been around engineers most of the time at work and at school and in graduate school, and then to turn around and have to communicate why people think this technology doesn't work to someone who has no background in technology whatsoever, probably an older individual who is probably not as tech-savvy. How do you break it down? And then on top of that, coming from a place like Amazon, for example, we're often seen in a light that isn't as positive for a number of issues. So what credibility do you even have coming from a company like that? So my job was very interesting in having to figure out how to break these concepts down and make them easy to talk about, which is something that I've kind of been doing. I mean, when I talk to my grandmother and my mother and tell them about what I do, that's practice that I've done throughout my life. So I had to adjust slightly, but I also had to paint a different picture. We're also combating a lot of what the PR has already mustered up for people. And so it can be quite difficult. So I totally understand the challenges here with the legislators, and I think that we still have a responsibility. All the technology developers and the companies and industries have a responsibility to answer to these people and to help them understand why this is or isn't a concern.

 

[00:18:56] AW: That was exactly where my mind went first, the whole scenario you just unpacked. I watch videos sometimes of people on Capitol Hill trying to explain this. I just love this one, kind of a famous one, where they were grilling Zuckerberg on, "Well, how exactly does it make money?" When you do try to break it down at that level, what you end up with sometimes is producing in their mind an embodiment of what the tech is doing, but I think we have to be careful, because quintessentially, when we're trying to explain just what it does, it can sometimes produce a little bit of fear, like, "Oh my gosh, I can't believe they can do that with my photo, and I never wanted them to do that." And before you know it, people have knee-jerk reactions on that spectrum that was introduced earlier and they become just like, "Just shut it down," without even fully understanding it, let alone understanding the value tradeoff. So I think when we're talking about informed consent and this ethical sourcing of data and whatnot, it's about whether personally identifiable information, or what we might consider private information, is being provided and/or used. I think businesses would just do better, quite frankly, if they better explicated or explained upfront what the value tradeoff was. So for instance, when it became clear that cookies were first tracking your movements on the internet, people used to turn them off. And even now, if you don't turn on certain features in your map functions, your map is not going to produce for you, really, what could be considered valuable results. If you happen to like pizza, every time you go by this particular area in town and pull your map up, it shows you pizza locations. Well, it only knows that because you've sort of trained it. Right? But then as soon as you tell someone you're training something, they get scared. But that's the whole lack of informed consent, if you will, and now you try to explain this to lawmakers who are decades behind.

 

[00:20:47] JL: On the other end of things, what kind of advice do you have for everyday developers that want to think more critically about the ethical impact of their work?

 

[00:20:56] NS: Consider all the viewpoints. I think oftentimes as technologists, we can get very ego-driven and think that our opinion is the only opinion that matters. Consider what the implications are. One thing we do at work is an exercise where we work backwards and ask, "What is the worst thing that could happen?" So what would be the worst possible headline to come out in the news based on this product that we're trying to conceptualize? And then we work backwards from there to make sure that that doesn't happen. I think also, when you're learning how to develop these various types of technologies, read not just about the technology side, but about some of the policy and the legal issues around it. And even if there isn't any of that, consider other institutions, like, for example, the Brookings Institution or the Human-Centered AI institute at Stanford. These are organizations that try to raise awareness of all these other topics that come up around the technologies we think are just simple, carefree technologies. So I would just say do your research.

 

[00:22:07] JL: And Abram, as an academic, what would the perfect CS curriculum, one with ethics in code as a requirement, look like?

 

[00:22:16] AW: I think what the literature shows is pretty clear: it's actually more financially beneficial, companies are more successful, when they upfront and intentionally incorporate ethical frameworks into their thinking, into their people individually, and also corporately into the structures that employ them. So we built relationships across campus where faculty would give credit for, or would use, some of the cases that we had employed in our programs at the center in their courses. So we helped build these plug-and-play modules that can go into any tech course. And because I actually used to be a professor of industrial engineering and technology, we would do the same thing. We would say, "Okay, you're building a program and you submit, let's say, a user report or a technical document or whatnot," and we would ask them things like, and Nashlie gave a good example, "What's the worst headline that can come out of this?" The more they get used to thinking like that early on, and understanding that companies are going to hire for that, the better, and you can bake that into the curriculum so that it's a repeated thing. I'll tell you one of the worst things you can do, and you see this in some universities: if they have a single course on ethics and that's the only portion of ethics that a curriculum contains, that is not doing it, because then students go in, they think ethically for this one class in one semester, and everywhere else is devoid of it. Instead, what the literature in the sciences has been pretty clear on is that the universities and programs putting out the thinkers, the thought leaders around ethics in tech, have a module, and it doesn't have to be huge, in every class, where in every assignment that you possibly can, even if it's a five-minute conversation, you cover the ethical considerations: the what-ifs, the worst-case scenarios, how this data could be misused if it was maintained for a long time. And there's a lot there. I mean, look at airplanes. The data on airplanes is kept for a hundred years. The data on other things is only kept for a couple of weeks. One of the modules we had was on the idea that "unintended consequences" is kind of a farce. You can have undesirable yet intended consequences. We see this in dirty areas like war; some people call it collateral damage, and that would be undesirable yet intended, meaning, "I'm still going to go forward with it." But what happens is that programmers use the phrase "unintended consequence" as a lazy way to say, "I didn't have the skill set, desire, or time, or didn't take the effort, to think forward about the foreseeable consequences. And therefore, I call them unintended." But the reality is, if I had just taken the time, I could have foreseen them, and then they might have been undesirable, but still intended. A lot of the time, what people call unintended consequences come about just because they were too lazy to think down the road. Because if they had noticed what those consequences would have been, they might not have proceeded, and we can bake that into the computer science curriculum across a variety of projects. And I'll tell you one thing we've done, and I think every university that's worth its credentials has this: a senior design program, and our students go through that. Senior design is usually a year-long project. You're programming something, you're building some app, whatever it may be, and the judging, the output requirements, and the winning teams have to have considered, discussed, and presented on the ethical ramifications. And again, in the room on the panel are sitting judges saying, "Did you think about how this data could be used in 10 years?" So I think there are a couple of keys there. It can't just be one class. It has to be embedded throughout and promoted by faculty who have enough industry experience to give legitimate reasons why things can go wrong.

 

[00:25:51] BH: My recollection of the way ethics was taught in the CS that I did take is that it really varied from professor to professor how much they thought ethics was a thing. Some were kind of dismissive, and for some it was more ingrained, depending on their particular perspectives. But no matter what, it always seemed a little abstract. It rarely spoke to truly affected people. It was always pretty high level, pretty academic, and I think we could stand to make it a little bit more personally human. I think things are definitely going that way in the mainstream. I wonder if academia is making things a little bit more concrete in that way.

 

[00:26:35] AW: Well, that's a good point. When you mention that it's pretty academic, there's a reason for that. The teaching, the contextualization, and the relevancy provided around why ethics matters is sort of bifurcated. You end up with academics who probably went bachelor's, master's, PhD, "I'm going to teach at a university," and how many products and systems did they ever really put out that went to market, that blew up in their face or to which they attached their reputation, and for which they later had to respond to a board on unintended consequences? What we found is that typically the better educators on the topic are people who have industry experience, which is why we're seeing a growing number of firms like Harris, or now L3Harris, coming in and sponsoring these sorts of things. They want to be integrated because they've got money behind scholarships, they've got money behind work programs for senior design. They're offering jobs to a lot of people, and part of the interview process is, "Hey, have you ever participated in these ethics seminars or competitions or things like that, and how many?" So you have it being taught from a more pragmatic standpoint. They're coming in and they're helping build these modules. Again, they're plug and play. We're not talking massive numbers of hours, because I know tech programs are very lengthy, 140 hours for a bachelor's degree type thing. We're talking five minutes, minor things on every assignment, but also these sorts of extracurricular things, because otherwise you end up with what you mentioned: an academically abstract, esoteric sort of approach, which students then can't go apply.

 

[00:28:05] NS: Yeah. And consider that those who are probably listening to this podcast may not even necessarily have a formal educational background. Maybe they're bootcampers or self-taught learners. We run across a lot of those. You think about how they are applying these ethics and considering privacy issues. Maybe there are programs out there that I'm just not aware of, but I think there's definitely a blind spot in our tech community that needs to be addressed.

 

[00:28:40] JL: Yeah. I mean, as this bootcamp grad that you're describing, we didn't talk about ethics once in my program.

 

[00:28:46] AW: Well, again, a fantastic point between the two of you. And the key is, just like corporate culture, which either happens intentionally, by the founding group, or accidentally, by all of the people who are a part of it and ultimately build the company. What we have to realize is that everybody, whether from their background, their socioeconomic status, their religious affiliation, if any, their previous work experience or educational construct and so forth, is going to carry with them into a workplace a unique ethical framework. For instance, are you utilitarian? Are you about fairness? And even if you say you're about fairness, are you about equality of opportunity or equality of outcome? Right? So if a company doesn't intentionally craft and communicate the ethical framework that they want their people to espouse, adhere to, and promote, then they kind of leave it to accidental development.

 

[00:29:38] JL: Nashlie, is that something you deal with a lot at AWS, just managing these different individual ethical frameworks?

 

[00:29:44] NS: Yeah. Yeah. No, I think like you said, it happens everywhere. You can’t really get around that. I mean, that’s the beauty of having a diverse team too, all these different backgrounds and perspectives coming together. I know I always say this, some things are good grassroots movements, but some things have to be top-down mandated. I think ethics is one of them. Ethics, diversity, and inclusion, those are things that everyone across the company has to be in sync on no matter what your background or history is.

 

[00:30:18] JL: How much responsibility do you feel like individual developers have to inform the greater populace about any ethical concerns within their product?

 

[00:30:28] NS: I mean, I think in general, the bigger the corporation, the more revenue you’re generating, the more responsibility you have to correct things that may not even be associated with you. Right now you’re seeing a lot of people and industries involved with Black Lives Matter and openly speaking out against racism. I mean, I personally believe it ties into revenue and it helps things overall. Of course, that’s what I believe in. It’s what I’m passionate about, but not everyone believes that. And so in essence, they’re taking responsibility on things that don’t necessarily affect them directly. And I think again, the bigger the voice you have, the bigger the platform, it’s important to speak up and stand up against these things. So I’ve noticed that I have a lot more influence than I thought. And I think in general, coming through undergrad, grad school, startup life and now at a big corporation, I haven’t had the most pleasant experience every time, oftentimes being the only one, only female, maybe only person of color. And so I think it’s important to get a good support system and also know how influential you are. So recently I’ve been crafting letters and emails to certain org leaders and not just at my company, but at other places too, and I’ve had some really positive responses back, and even before, for example, COVID and before the protests and everything. And so it is definitely our responsibility as technologists, it’s our responsibility as large corporations and industries to just help things in general. We can do a lot better.

 

[MUSIC BREAK]

 

[AD]

 

[00:32:29] JL: Join over 200,000 top engineers who have used Triplebyte to find their dream job. Triplebyte shows your potential based on proven technical skills by having you take a coding quiz from a variety of tracks and helping you identify high growth opportunities and getting your foot in the door with their recommendation. It’s also free for engineers, since companies pay Triplebyte to make their hiring process more efficient. Go to triplebyte.com/devdiscuss to sign up today.

 

[00:32:54] BH: Another message from our friends at Smallstep. They published a series called “If you’re not using SSH certificates, you’re doing SSH wrong,” on their dev organization page. It’s a great overview of Zero Trust SSH user access and includes a number of links and examples to get started right away. Head over to dev.to/smallstep to follow their posts and learn more.

 

[AD END]

 

[00:33:19] JL: Now we're going to move into a segment where we look at responses that you, the audience, have sent us to a question we asked in relation to this episode.

 

[00:33:26] BH: The question we asked you all was, "Where do you think ethics in tech is falling short?" Ryan responded, "I think it is falling short on fact-checking misinformation and disinformation. Facebook has recently taken a stance to not show an alert on content that could be misleading. It may not change everyone's opinion, but it would be a step in the right direction to have public figures be fact-checked. The anti-vaccination movement may not be a thing if fact-checking had been in place when it gained traction." So I think misinformation and disinformation is a tech problem in that these are things that can scale up on platforms like Facebook. It just wasn't possible before to infiltrate an entire populace with disinformation in the same way. But this is also very mainstream, in that I think the average citizen understands this problem perhaps a little bit better than they understand issues of informed consent around data sources and things like that. So where do you think mainstream issues fall in terms of the conversation about ethics versus some of the issues that are only discussed among more technically informed individuals?

 

[00:34:44] AW: It's interesting how, even in the question, you called it a technical issue or a tech issue. And I think you could conjecture that it's really a societal issue that happens to be, and you could argue it's either exacerbated or enabled, or it just becomes more obvious that it occurs, because not only do we see it becoming exacerbated, but we're more aware of it, because a feedback loop is created whereby we can actually collect data on it and see what's going on. But if you think about it, the fact that people chose to live in their own echo chambers is not new to society. Before platforms like Facebook and other social media things, it was not a new thing that people made their own personal choice about what medium they were going to consume their "news" or information through. Maybe I got the word from Brother Johnny down the street. You see what I'm saying? So I could have created my own echo chamber before social media platforms. It's just that the sheer number of people who do it, and where and how they do it, wasn't as transparent. We didn't have data on it to show that people have their own echo chambers. So now the fact that Facebook and others (and I'm not trying to just pick on Facebook, but these are enabling platforms that allow for that) provides, or almost necessitates, this conversation to say, "Hey, since we're now one of the mediums through which these echo chambers continue to be created, what do we do about it? And do we have a moral obligation to it? Do we have a virtuous obligation to it? Or is it purely utilitarian?" So that kind of goes back to your framework, your own mental model and ethical framework. And some people will say, "We absolutely have an obligation to it." Well, if you have an obligation now, did you have the obligation before to go correct other people's echo chambers? Well, we did to some degree. I mean, when those echo chambers resulted in racist organizations, then we as a society confronted that. But there is this threshold at which you say, "You know what? You're allowed to be an ostrich and stick your head in the ground and collect bad news, only up to the point that it doesn't have a negative effect on me, maybe." As our society has become more and more integrated, what constitutes a negative effect on me? Because feedback loops are shortened. It means that if you choose to stick your head in the ground and have your own proprietary echo chamber, and then you affect society because, let's just say, you pick up bad driving habits like texting and driving, or let's say it's even worse than that and you're destroying property or whatever, the issue becomes: how long or short is that feedback loop? And I think that helps us, whether consciously or not, determine the degree to which we believe we should intervene in people's decision of where to obtain news and what news they internally filter as appropriate or not.

 

[00:37:39] JL: Let's move on to our next writer. Almenon wrote in, "Video game companies using loot boxes. It's basically unregulated gambling. I'm not against the idea of gambling in general, but it needs regulation to avoid companies taking advantage of people with addictive personalities."

 

[00:37:55] NS: Again, I think that people developing technology have to be aware of policy and government regulations and how this technology impacts people downstream. And like I said earlier, think about the worst that can happen with your technology before you build it. And then also consider that many of the people in public policy and in government are not technical and don't really understand the technology. So how can they begin to help people if they don't understand it? So I would encourage more people in technology to consider going into those fields. You can also have joint positions, advisor positions for public policy. You could become an expert. There are different ways to get involved and to help influence these sorts of regulations. So keep that in mind.

 

[00:38:48] AW: And so it's actually a perfect example of that sort of elongated answer I gave, which I took as a philosophical question. So, okay, it sounds like the person asking the question may or may not have a gambling concern, but maybe they have friends or family members who lost thousands of dollars to it. And so they feel that there should be a societal intervention to avoid having people who, to the question, might have a gambling addiction, becoming addicted to these things. So even though it was probably not a direct concern for the individual who asked the question, I think the policymakers are trying to consider these things.

 

[00:39:27] JL: JP wrote in, "Companies like Apple who are strongly fighting the idea of an open web and take 30% from apps. This tech way of value-based pricing is extremely capitalistic and disrespectful to the people and developers. It's sad to see how cultishly people follow tech trends like Apple, because the majority of people know little to nothing about the field to have a solid opinion."

 

[00:39:49] BH: Yeah, it's amazing that about 14 years after this idea was first put into place, Apple is finally getting legit backlash over taking their 30% cut within the apps, and nothing changed. It was kind of a buildup of a lot of little frustrations, and then Basecamp came and really complained loudly. And now it's more of a public conversation. Will that amount to anything? Basecamp ultimately got allowed into the App Store. For people that don't know, Basecamp basically had their app not accepted into the App Store because they were accepting payments off the platform and Apple wanted them to pay 30%. They disagreed, they ultimately went back and forth, and they got allowed into the App Store, citing precedent of other companies that were doing fine. But there's the broader question of, "Is it okay for a virtual monopoly to restrict the way developers develop on the platform like that?" So Janne writes in, basically, that the list of ethically distasteful companies is very long, speaking of corporate giants, and they're often places people "dream of working at." That's companies like Amazon, Facebook, Google, Netflix; you name the giant. Often these are places people dream of working at, and often they have a lot of practices which are ethically questionable. Yeah. Let's talk about that.

 

[00:41:25] NS: So I always say, if you're going to work at a corporation like that, you've got to take the good with the bad. There are so many different teams and initiatives and projects going on at these companies, Amazon of course included, and they're all kind of decentralized and autonomous and doing their own thing. You can say, "Yeah, okay, not every manager is a great manager." And I think that's the case at any place. Not every employee is going to be stellar, excellent, honest, and true with the employer. And so when you have something on the magnitude of a large corporation like Amazon, you only magnify those discrepancies even more. And so you'll get things like news articles about fulfillment center employees being mistreated, and concerns about face recognition, and concerns about the Ring doorbell. And some are valid and some may not be as valid. A lot of it could be narratives that are pushed by PR. I'm not saying some are and some aren't, but this is generally what happens anyway across the field. And so as an employee there, I have to reconcile with myself and say, "Okay, why am I here? Am I here to make a difference? Maybe this is just a job to me. I just need a paycheck and I don't care. That's not me. I'm here to make a difference. If I wasn't here, would these things have happened? Would this change have come?" It feels like an uphill battle sometimes, but it is important, the work that I'm doing. You have to have influence on the inside as well as on the outside. It has to come from both places. And so I consider myself someone who is dedicated to the cause and the fight, whatever that may be. Like I said, diversity and inclusion, ethics, general tech teams improving relations amongst employees, I'm here for you. And so that's what kind of keeps me going.

 

[00:43:27] AW: I'll add to that a little bit. What we're seeing is the rise of the ethical corporation, where the ability to attract talent in the knowledge worker world is becoming more and more tied to a company's ability to show ethical frameworks, the intentionality of that, the promulgation thereof, and that people can have a positive impact. Because while that's all true, that companies are winning on recruitment by doing that, there are still going to be plenty of environments where, almost going back to the echo chamber conversation, if I want to work at a company that lets me get away with questionable coding practices and I don't want my boss constantly asking me ethical questions, I might choose to go work at one of those firms formerly listed.

 

[00:44:13] BH: Yeah. It’s certainly a topic of our times in the tech industry and one which is going to be highly personal, but also sort of highly collective. What impact can we make from within an organization? What impact can we make by refusing to take part in certain parts of our corporate society and what can we do in the long run? It certainly comes down to ethics and what we push as developers.

 

[00:44:43] JL: Nashlie, Abram, thank you both so much for joining us today.

 

[00:44:46] NS: Thank you. I appreciate it.

 

[00:44:56] JL: I want to thank everyone who sent in responses. For all of you listening, please be on the lookout for our next question. We’d especially love it if you would dial into our Google Voice. The number is +1 (929) 500-1513 or you can email us a voice memo so we can hear your responses in your own beautiful voices. This show is produced and mixed by Levi Sharpe. Editorial oversight by Peter Frank and Saron Yitbarek. Our theme song is by Slow Biz. If you have any questions or comments, please email [email protected] and make sure to join our DevDiscuss Twitter chats on Tuesdays at 9:00 PM Eastern. Or if you want to start your own discussion, write a post on Dev using the #discuss. Please rate and subscribe to this show on Apple Podcasts.

 

[MUSIC BREAK]

 

[AD]

 

[00:45:53] SY: Hi, there. I’m Saron Yitbarek, founder of CodeNewbie, and I’m here with my two cohosts, senior engineers at Dev, Josh Pitts.

 

[00:46:00] JP: Hello.

 

[00:46:01] SY: And Vaidehi Joshi.

 

[00:46:02] VJ: Hi everyone.

 

[00:46:02] SY: We’re bringing you DevNews. The new show for developers by developers.

 

[00:46:07] JP: Each season, we’ll cover the latest in the world of tech and speak with diverse guests from a variety of backgrounds to dig deeper in the meaty topics, like security.

 

[00:46:15] WOMAN: Actually, no. I don’t want Google to have this information. Why should they have information on me or my friends or family members, right? That information could be confidential.

 

[00:46:23] VJ: Or the pros and cons of outsourcing your site’s authentication.

 

[00:46:26] BH: Really, we need to offer a lot of solutions that users expect while hopefully simplifying the mental models.

 

[00:46:34] SY: Or the latest bug and hacks.

 

[00:46:36] VJ: So if listening to us nerd out about the tech news that’s blowing up our Slack channels sounds up your alley, check us out.

 

[00:46:42] JP: Find us wherever you get your podcasts.

 

[00:46:45] SY: Please rate and subscribe. Hope you enjoy the show.

 

[AD END]