GitHub opens up its inner workings, a 17-year-old opens up Twitter, TikTok opens up its algorithm, and GPT-3 opens up more AI possibilities.
In this episode, we cover GitHub's new public roadmap, the hacker behind the recent Twitter hack that took control of a bunch of high profile accounts, and TikTok's plans to disclose their algorithm. We also speak with Pedro Cruz, a developer advocate at IBM who is teaching developers how to use artificial intelligence and extended reality. He shares some reflections on whether OpenAI's powerful autocomplete program, GPT-3, is all it's cracked up to be, and more!
Saron Yitbarek is the founder of CodeNewbie, and host of the CodeNewbie podcast and co-host of the base.cs podcast.
Josh Puetz is Principal Software Engineer at Forem.
Pedro Cruz is a developer advocate at IBM, teaching developers how to use artificial intelligence and extended reality.
[00:00:02] LS: Hey DevNews listeners. This is Levi Sharpe, the producer of the podcast. We really want to benefit from your feedback on the show. So we’re gifting anyone who submits a review on Apple Podcasts, a pack of Dev stickers. All you have to do is leave a review on Apple Podcasts and then fill out the form in our show notes so that we have your mailing address for the stickers. Thanks for listening.
[00:00:33] SY: Welcome to Dev News, the news show for developers by developers, where we cover the latest in the world of tech. I’m Saron Yitbarek, Founder of Code Newbie.
[00:00:42] JP: And I’m Josh Puetz, Principal Engineer at Dev.
[00:00:44] SY: This week, we’re talking about GitHub’s roadmap, the Twitter hack, TikTok’s algorithm, and OpenAI’s GPT-3.
[00:00:52] JP: Then we’ll be speaking with our guest, Pedro Cruz, a Developer Advocate at IBM teaching developers how to use artificial intelligence and extended reality, about whether OpenAI’s GPT-3 is all it’s cracked up to be.
[00:01:05] PC: But as this comes to the hands of the public and other people, sometimes that research does not take into consideration how the rest of the world feels, and that’s the important part here.
[00:01:16] SY: Let’s start off by talking about the GitHub public roadmap. So GitHub has now released a public roadmap to give developers more visibility into what kinds of features GitHub is working on and going to be shipping in the future. In the last year alone, GitHub has released more than 200 new features, such as Actions, Packages, and Codespaces. In the announcement, released on July 28th, the company said the roadmap isn’t exhaustive, but will include most of their product plans, and that they hope this new transparency into their projects and timelines will give teams the ability to better plan for how they could use GitHub in the future, as well as give feedback early on about what the company is building. So what are your thoughts on this public roadmap?
[00:02:00] JP: This is really interesting to me. I think it speaks to the position GitHub has in the industry that they’re comfortable, just kind of laying it out on the line and saying, “Here’s what we’re planning on working on.” Most companies I think would be very reluctant to publicly announce we’re going to be working on these features in the next quarter, two quarters, three quarters away, because the fear is always that those features are going to slip and you’re not going to live up to your promises. They’re kind of making a promise here and they sort of have to stand by and it’s going to be very visible if it slips at all.
[00:02:30] SY: Yeah. It really feels like a show of confidence. I mean, to be fair, GitHub doesn’t have many competitors, but it is saying, “If you are a competitor, if you’re considering competing, we don’t really care for you to know what we’re up to because we’re going to beat you anyway.” You know what I mean? It just kind of feels like they don’t really care about the other people who might be trying to catch up to them or trying to displace them. They’re just kind of doing their own thing, really confident. And I guess it makes sense in that I think increased transparency with your users builds trust, it builds a good relationship. It probably increases loyalty. So I think in all those ways it can be really useful. I’m just thinking, if I had a product and I had all these plans and ideas for the future, I feel like I just wouldn’t want to tell anyone. It just feels secret. You know what I mean? It just feels like the kind of thing I’d want to keep to myself. I don’t know. I’m surprised by things like that.
[00:03:19] JP: I think it’s a different conversation given that GitHub works with the open source community so much that you see this.
[00:03:26] SY: That’s true.
[00:03:26] JP: You see this kind of thing much more open with open source communities and projects where they’ll announce, “We’re working on feature XYZ. We plan to bring feature XYZ to the markets. We’re working on this compatibility. Languages do this all the time. Here’s the next version.” And a lot of times it’s because they have these proposals open for contribution and comment by the community. Well, GitHub, they’re a for profit company. I think they’re taking a lot of cues from the open source community, and frankly, a lot of the expectations the open source community might have around that kind of transparency.
[00:03:58] SY: Yeah. I think that’s fair. And the other example I can think of that does an open roadmap, a public roadmap like that is Trello, which is interesting because as far as I know, I’m pretty sure Trello is not open source at all. I think they’re just private all the way.
[00:04:11] JP: They are. Yeah.
[00:04:12] SY: Even they still have a public roadmap, which again I find surprising, and I think you’re right. I think that for GitHub, it makes sense because they are open source. They’re a big proponent of open source. There are a lot of aspects of their work that are very much on the open source side of things. I mean, of course, you have the ability to change your mind and to kind of adjust things as you go, too. It’s a working, living roadmap, right? It’s not a permanent, definitive roadmap, but still, it feels like you’ve added a level of accountability that I don’t know if I’d be comfortable committing to, personally speaking.
[00:04:42] JP: I could definitely understand that. One thing that stood out to me about this public roadmap is I don’t, for a second, believe that this is their internal roadmap as well. It’s very clear that all of the items on this roadmap are from one user account. It’s very clear that this is a public document, and I don’t want to say it’s advertising. However, I don’t think anybody should believe this is the same roadmap internally that GitHub is making their commits to and having their discussions on. This is a public-facing document.
[00:05:16] SY: Yeah, and the other thing, and I don’t think that’s necessarily a bad thing, but I’m thinking, for example, of security patches, right? You probably don’t want to publicize, like, “Hey, on this day, we plan on fixing this security issue.” You know what I mean? For simple things like that, you probably don’t want to make certain things public, just for keeping users safe, keeping data safe. You probably don’t want to share absolutely literally everything that you’re possibly working on.
[00:05:41] JP: Right. I know for my personal use case, there are several features on this roadmap that I was curious about and now that there’s actually some sort of a date around them, I can do planning as to when to maybe support those features or maybe when to count on them being widely available. Speaking of which, is there anything that jumped out on this list that you’re excited about?
[00:06:00] SY: Yeah. There are a couple things. One is I’m very, very curious about GitHub Discussions, and I was trying to get a sense of what exactly it’s going to look like. It feels like a forum. It says that there are going to be easy ways to have conversations, to organize knowledge bases, and to ask questions and get questions answered, which to me sounds like some type of forum situation, but I thought that was really interesting. I was like, “Oh, wow! They’re going to try and create more community-esque features for GitHub.” I’m kind of wondering what that’s going to look like and how it’s going to play out in real life.
[00:06:32] JP: Yeah.
[00:06:33] SY: What about you?
[00:06:34] JP: Well, I have been insanely excited for Codespaces. That is GitHub’s way of running code in the browser, having it just hooked right from your PR or repository. I think that’s a really exciting space.
[00:06:48] SY: Yeah.
[00:06:50] JP: The thing I took away from this roadmap is that it’s in beta right now, but they say that it’ll be in general availability in Q4, October to December of 2020.
[00:06:58] SY: Okay.
[00:06:58] JP: So that’s interesting.
[00:07:00] SY: Not that far away.
[00:07:00] JP: Yeah. The other thing I thought was really, really interesting is a much smaller scope. A lot of the items on this roadmap are very large-grained. This was a really small one, it’s titled “Pull Request Revisions and Improved Workflow”. The story they kind of laid out was you open a pull request in a draft state, you work on it, and you mark it as ready for review, and then some reviewers come in and take a look at it, and they’re adding a feature where you’re going to be able to pause the review. Maybe you’ll fix some items. Maybe you’ll change your approach, but you can basically pause the reviews and bounce back and forth between the draft and the ready-for-review state. I know that would help me out a whole lot.
[00:07:39] SY: That would be amazing because it’s just so awkward when you start getting feedback and then you’re like, “Wait, wait, no, stop.” And you have to go back and like…
[00:07:45] JP: Never mind. I need some time for this.
[00:07:47] SY: Yeah. Yeah. That’s always such an awkward moment, but yeah, being able to just kind of pause it right in the app would be super useful. I love that. I really, really like that. The other feature I liked is number 77, which is Pages private access, having the ability to make Pages internal-facing for enterprise clients. Yeah. I thought that was interesting because it allows you to build kind of your own wiki, your own intranet to a degree. Right? You have your own private knowledge base, which I think would be really useful, just thinking about how we organize information and how we organize documentation. I feel like that could offer a different level of how to document, how to communicate, and how to create that knowledge base within an organization, within a team. So I thought that was really cool. And one more feature I want to mention briefly is that they’re going to change the default branch name for new repos from master to main.
[00:08:41] JP: This is well, well overdue. I’m very excited to see them support this natively. There’s been a lot of conversation and a lot of tutorials about how to do this on your own repo, but having them build it into the platform just makes sense.
[00:08:56] SY: It was huge.
[00:08:57] JP: It’s going to be one less argument or excuse not to do it. You really won’t have an excuse not to do it anymore.
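In the meantime, the manual rename Josh mentions is only a couple of git commands. Here’s a minimal sketch that drives git from Python against a throwaway sandbox repository purely for illustration; in a real project you would run the same git commands directly in your own working directory:

```python
import subprocess
import tempfile

# Throwaway sandbox repo just for this demo.
repo = tempfile.mkdtemp()

def git(*args):
    """Run a git command inside the sandbox repo and return its stdout."""
    done = subprocess.run(
        ["git", "-C", repo, *args],
        check=True, capture_output=True, text=True,
    )
    return done.stdout.strip()

git("init", "-q")
git("-c", "user.name=demo", "-c", "user.email=demo@example.com",
    "commit", "-q", "--allow-empty", "-m", "initial commit")

# Force-rename the current default branch (master on older git) to main.
git("branch", "-M", "main")
print(git("branch", "--show-current"))  # prints: main

# With an existing remote named origin, you would then push the new branch
# and delete the old one:
#   git push -u origin main
#   git push origin --delete master
```

The `-M` force-rename works whether the repo’s current default branch is master or main, and the commented push commands sketch the remote side of the move, assuming a remote named origin.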
[00:09:03] SY: Yes, absolutely. Yeah. I thought that was really cool. That made me feel good to say. All right, what else we got?
[00:09:09] JP: So remember back in mid-July, when there was that massive Twitter hack that led to a bunch of high profile accounts from people like Barack Obama, Kanye West, and Elon Musk posting to their followers with the following message, “I am giving back to the community. All Bitcoins sent to the address below will be sent back doubled. If you send $1,000, I will send back $2,000. Only doing this for 30 minutes.”
[00:09:33] SY: Yikes.
[00:09:33] JP: Yeah. I mean, I’ll admit, for a good 10 seconds I was fooled. So when this happened, our Slack at Dev lit up with all of us commenting on this news. We’re speculating, “How could it have happened? Was it someone internal to Twitter?” The security team at Twitter must be flipping out completely. Well, it turns out the alleged mastermind behind this hack is a 17-year-old kid from Florida named Graham Ivan Clark. And even though he’s a minor, he’s being tried as an adult. He faces 30 felony charges. Two other folks, a 19-year-old named Mason John Sheppard and a 22-year-old named Nima Fazeli, are also being charged for aiding Clark. Now what’s interesting about this hack is that it wasn’t an exploit of a technical vulnerability or anything like that. Clark allegedly convinced one of Twitter’s employees that he worked in their technology department and that he needed credentials to access their customer service portal. From that portal, prosecutors say he was able to break into about 130 Twitter accounts. But apparently, Clark and his friends were kind of sloppy covering their tracks because they left hints about their real identities and how they hid the $180,000 they’re accused of raking in. A little more on Clark: apparently he told his friends that he had an unhappy home life, and he’s been scamming folks since the age of 10, when he would cheat people out of money on Minecraft by pretending to sell them in-game items and handles that he just wouldn’t make good on. He was also implicated in the theft of $856,000 worth of cryptocurrency in 2019, although for that, he was never charged since at the time he was a minor.
[00:11:11] SY: Okay. So he’s like an OG hacker then? Like this is not new territory.
[00:11:16] JP: Right. It sounds like he has been, let's say grifting and doing social engineering for quite a while.
[00:11:23] SY: So it’s interesting because I think that passing around and kind of sharing credentials is generally an annoying thing for developers, right? I mean, there’s definitely tools around how to do it, but I feel like there’s nothing totally seamless and really clean and great about handling credentials. And I’m kind of thinking, “Is that basically what happened in this scenario?” The developer is like, “Oh, I’ve got to share these credentials. All right. Let me just send them real quick.” And that was it, without there being any more thought into, “Wait a minute, who is this person and how do I share it?” You know what I mean? Just kind of going through that process, which is kind of this hassle that you just want to get off your plate and didn’t really think twice about.
[00:12:06] JP: I mean, I’ve definitely been in the situation where it might not be a username and password, but it might be like an API key or a server address, something internally you’re configuring for your code, and you pop it into your setup instructions internally or a coworker is like, “Hey, what’s that API key I used for our logging service?” And you just shoot it over to them really quick.
[00:12:27] SY: Right.
[00:12:27] JP: I could definitely see, if I worked at a large organization, I might not know every developer I’m talking to by name. I could see how this sort of thing might occur.
[00:12:37] SY: Yeah. It doesn’t seem totally adequate. I mean, it’s disappointing, but it is interesting to me that it wasn’t anything technical. It wasn’t a vulnerability. There’s no patch for this. You know what I mean? This could easily happen again. And it was just this 17-year-old who was able to socially engineer and basically trick someone. I think that’s just wow. That’s incredible.
[00:12:59] JP: I know a lot of the initial conversation was that everyone should change their password, and Twitter wasn’t saying this. This was what developers on Twitter were saying. You should consider changing your password, enable two-factor authentication if you have it, all the good account security hygiene items that most of us should do and it seems like very few of us actually do practice, but none of that would help with this at all. To use a horror movie term, “The call was coming from inside the building.” Someone was in their system. I kind of feel like this is a security engineer’s absolute worst nightmare. You could write documents, you could build security systems as much as you want, but at the end of the day, it’s the people that are the weakest link in any security system.
[00:13:46] SY: Yeah. It’s like, “How do you prevent this from happening again?” Like the next time someone asks you for your credentials, do you do a little bit of research on that person, make sure they work there? Not only do they work there, but they’re the type of employee who should be having access to that thing that you were giving credentials for. Right? Because that’s also a separate thing. Do we need to now build that into our process or what’s the right way to move forward? What do you learn from this?
[00:14:11] JP: Some of the articles I’ve read from a security perspective have talked about how shared credentials are really the smoking gun here. You have to think, “What kind of system is so critical and has so much access, and yet you share credentials to it?” That seems really not great.
[00:14:30] SY: That’s true.
[00:14:31] JP: You know? You can’t really like work out assigning roles to people if you just have a bunch of credentials that you’re sharing.
[00:14:36] SY: Very good point.
[00:14:38] JP: Yeah.
[00:14:38] SY: So don’t share credentials is maybe the lesson to be learned here.
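As a loose sketch of that lesson, and not Twitter’s actual setup, one common pattern is to keep secrets out of chat entirely: each service reads its secrets from its own process environment (populated by a secrets manager at deploy time) and fails loudly if one is missing. The variable name and value here are made up for illustration:

```python
import os

def load_secret(name):
    """Read a required secret from the environment instead of from a teammate."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required secret: {name}")
    return value

# Stand-in for a value that would normally be injected by a secrets
# manager or deploy tooling, never pasted into Slack.
os.environ["LOGGING_API_KEY"] = "demo-key-123"

api_key = load_secret("LOGGING_API_KEY")
print(api_key)  # prints: demo-key-123
```

The point is less the few lines of code than the workflow: credentials live in one audited place per service, so the question “can you send me that key real quick?” never needs to be asked.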
[00:14:41] JP: One comment I read talked about how in the world of high profile Twitter accounts, Donald Trump’s account is very, very high profile and was not subject to this hack. There’s some speculation that accounts like Jack Dorsey, the CEO of Twitter, and Donald Trump have even higher level protections on them, and that’s why they weren’t subject to this particular hack.
[00:15:04] SY: Interesting. I never even thought of that. I never considered that different accounts would have different levels of security.
[00:15:11] JP: I think that’s interesting. I’m really curious to see if we will hear any kind of post-mortem from Twitter, based on the company’s actions in the past. I think it could go either way. They’ve been hauled in front of Congress before for their accountability and transparency. So maybe they’ll publish a post-mortem and try to explain what happened. On the other hand, a lot of times in security work, you don’t want to publish a post-mortem.
[00:15:35] SY: That’s true.
[00:15:37] JP: Because you’re basically telling people how to hack your system.
[00:15:40] SY: Yes, that is true.
[00:15:56] SY: Heroku is a platform that enables developers to build, run, and operate applications entirely in the cloud. It streamlines development, allowing you to focus on your code, not your infrastructure. Also, you’re not locked into the service. So why not start building your apps today with Heroku?
[00:16:13] Vonage is a cloud communications platform that allows developers to integrate voice, video, and messaging into their applications, using their communication APIs. They’ve just launched Vonage Voyagers, an exciting new program, which rewards engaged community members with training, mentorship, and awesome limited edition swag. If you apply before August 15, you can make it into the September crew. Find out more at nexmo.dev/voyagers-devnews.
[00:16:46] SY: So there’s been a lot of news about TikTok the past couple of months.
[00:16:50] JP: Understatement of the Year.
[00:16:52] SY: Yeah. So many headlines. So a new banner alert feature on iOS 14, which shows when apps on the device are pasting from its clipboard, exposed that TikTok is one of the biggest offenders when it comes to reading user clipboards. And then there’s the matter of government spying. Secretary of State Mike Pompeo said that the United States is considering banning Chinese social media apps, suggesting that some of these apps, like TikTok, share users’ information with the Chinese government. Amazon sent an email out to its employees saying that they had to remove the TikTok app entirely from any mobile device that accesses Amazon email, which they then rescinded. Microsoft spoke with the US government and is considering acquiring TikTok, planning to either make a deal or not by September 15th. And now to make things even more interesting, TikTok has announced that they will be opening up their algorithm. TikTok CEO, Kevin Mayer, announced in a blog post that, “With our success comes responsibility and accountability. The entire industry has received scrutiny and rightly so. Yet, we have received even more scrutiny due to the company’s Chinese origins. We accept this and embrace the challenge of giving peace of mind through greater transparency and accountability.” He also said that he believes, “All companies should disclose their algorithms, moderation policies, and data flows to regulators as well.” So what are your thoughts on this idea of TikTok just being very kind and just opening up its algorithm?
[00:18:23] JP: Suspiciously kind. Wouldn’t you say?
[00:18:26] SY: Yeah, very suspiciously. Yeah.
[00:18:28] JP: We just had previously pointed out how big of a deal it was that GitHub posted a public roadmap and here’s TikTok saying like, “Roadmap? Forget it. Here’s how we do our entire algorithm.”
[00:18:39] SY: Yeah.
[00:18:40] JP: It’s a huge deal. The timing is so suspicious. TikTok posted this right before the big judicial antitrust panel, which hauled in CEOs from Facebook, Google, and Amazon to testify and you have to think that timing is not coincidental.
[00:18:58] SY: What are you thinking? Are you thinking that they kind of want to get ahead of the antitrust panels so maybe they’re not on that panel in the future? Do you feel like that’s what they were kind of getting at?
[00:19:07] JP: Yeah. I feel like it might be one way to get ahead of that panel. Also, TikTok is a Chinese company, and a lot of the criticism that they are getting is suspicion from the US government and US companies about what is happening to the data of US users. Is it being held in China? Is it being somehow analyzed and inspected for data harvesting? And this is one potential way to get those criticisms out of the way. We should also point out that running in the background are these calls from the US government for TikTok to divest its US operations. I feel like TikTok is saying, “No, here’s our algorithm. Here’s what we do with the data. Here’s how we determine who sees what.” It’s a really effective way for them to prove, “We’re not doing anything shady with this data.”
[00:19:53] SY: But then the question is, “Is this the algorithm?” Who’s to say that this is the thing, like the culprit that they’re actually sharing and it’s not a decoy algorithm that might be actually like hiding or just keeping it to themselves the thing that’s actually causing all this damage? How do we trust their transparency and their openness?
[00:20:15] JP: That’s a really good point, and I don’t know that so much of the outcry over TikTok is about the algorithm. People are not complaining.
[00:20:22] SY: That’s true.
[00:20:22] JP: I’m not getting the most relevant dance TikToks on my stream.
[00:20:27] SY: My stream is not relevant enough. Yeah.
[00:20:29] JP: Right? They’re complaining that I don’t know what’s happening with my data and I don’t understand why so much data is being held. You mentioned that they got in trouble for copying clipboard contents and sending that data back to their servers. I don’t know that that would show up in the algorithm at all.
[00:20:46] SY: Yeah. So maybe this whole, “We’re going to be transparent,” is an entire decoy just overall of them saying, “Look, we’re being open, we’re being transparent,” hoping that people would go, “Okay, cool. We have the algorithm. We’re good.” Not realizing that the algorithm is actually kind of irrelevant. That’s not what we’re scared of. We’re scared of what they’re doing with the data and where it’s going and where it’s being stored and who has access to it, which you’re right, would not show up in the algorithm.
[00:21:10] JP: I think strategy-wise, this is also very interesting because you have not seen this pledge to open their algorithm from other companies that rely on algorithms. Notably Google and Facebook: neither one of them has jumped at the challenge of opening up their algorithms and being transparent about it. I almost wonder if this is TikTok working with what they’ve got in terms of ammunition to counter a lot of these claims that are going back and forth.
[00:21:37] SY: The other thing that I was thinking is, what happens if TikTok does open it up? Does that mean that a competitor can kind of come in, use that algorithm, and make its own version, and then potentially compete with TikTok? Do you feel like that could be the potential, I don’t know, downfall of TikTok and we’d move away from it, or do you feel like it has some staying power, some permanence?
[00:21:59] JP: I don’t know how much of TikTok’s popularity is, frankly, their algorithm. I have a 12-year-old daughter and I asked her about TikTok’s algorithm and she really did not care. She’s much more interested in the fact that she can get content from the creators that she follows and that she’s interested in. So I think there are some huge network effects that TikTok has, that we’ve seen with social networks in the past, that don’t necessarily rely upon their algorithm, and I think that’s where their strength is, frankly.
[00:22:29] SY: Right. Right.
[00:22:30] JP: Now let’s talk about GPT-3. A company called OpenAI has come out with an autocomplete program, which it is selling to customers as a private beta, that might have huge implications for the future of artificial intelligence. The program is called GPT-3, which stands for Generative Pre-trained Transformer 3. The big thing about this program is not only its scale, but the variety of auto-complete functions it can do. Things people in the AI community have created with GPT-3’s commercial API include a chat bot that lets you talk to historical figures, code generation based on text descriptions, guitar tabs for playing music, and auto-completing images. You give it half of an image and it comes up with the bottom half of the image. GPT-3 was not specifically trained to do any of these things out of the box, but because of the scale of the model, users only need to input a few examples of what they want and then the program quickly builds upon it. In the AI world, this is called “Few-Shot Learning”.
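To make “few-shot learning” concrete: rather than retraining the model, you prepend a handful of example input/output pairs to the prompt and let the model continue the pattern. Here’s a rough sketch of how such a prompt might be assembled; the English-to-French pairs and the exact formatting are our own illustration, not OpenAI’s API itself:

```python
def build_few_shot_prompt(examples, query):
    """Turn example pairs plus a new query into a single text prompt."""
    lines = []
    for english, french in examples:
        lines.append(f"English: {english}")
        lines.append(f"French: {french}")
    lines.append(f"English: {query}")
    lines.append("French:")  # the model would complete from here
    return "\n".join(lines)

examples = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
]
print(build_few_shot_prompt(examples, "peppermint"))
```

The entire prompt is sent as one text string; with only a couple of demonstrations, a large enough model will usually continue with the French translation, which is the few-shot behavior described above.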
[00:23:34] SY: We’ll get into more of the implications of GPT-3 after this.
[00:23:52] SY: Over nine million apps have been created and run on Heroku’s cloud service. It scales and grows with you, from free apps to enterprise apps, supporting things at enterprise scale. It also manages over two million data stores and makes over 175 add-on services available. Not only that. It allows you to use the most popular open source languages to build web apps. And while you’re checking out their services, make sure to check out their podcast, Code[ish], that explores code, technologies, tools, tips, and the life of the developer. Find it at heroku.com/podcast.
[00:24:29] VJ: Vonage is a cloud communications platform that allows developers to integrate voice, video, and messaging into their applications, using their communication APIs. They’ve just launched Vonage Voyagers, an exciting new program, which rewards engaged community members with training, mentorship, and awesome limited edition swag. If you apply before August 15, you can make it into the September crew. Find out more at nexmo.dev/voyagers-devnews.
[00:25:02] SY: Here with us is Pedro Cruz, Developer Advocate at IBM, teaching developers how to use AI and XR. Thank you so much for being here.
[00:25:10] PC: Thank you so much for having me.
[00:25:11] SY: So can you explain a little bit more about how GPT-3 has been taught to auto-complete to the degree that it does?
[00:25:18] PC: So GPT-3 is an evolution of GPT-2 and GPT-1, and it uses its parameters to be able to complete text and complete code. I’m starting to experiment with generating scripts, with GPT-2 specifically, since GPT-3 is still in a private beta. So with hundreds of billions of parameters, compared to previous research, it’s getting better and better.
[00:25:44] JP: Pedro, is this as big of a deal as some of the articles are making it sound? I saw tons of Twitter posts showing incredible auto-complete websites and music. Is it really that big of a deal?
[00:25:56] PC: It is a big deal, maybe over-hyped at this current moment. I think that there’s a lot of potential because we’re able to automate many things. For me, I’m not a really good writer. I prefer to create videos. That’s my first way of communicating or speaking. So writing for me is difficult. So now imagine that I can have a bot where I can tell it my ideas, I can start writing, and it can create a full draft of my blog post. In that sense, for me, it’s amazing. For other people who are coders, I saw a demo on Twitter of someone generating the Google website, right? I believe it was using React, and you just explained what it was supposed to be, how the website was supposed to look, and it generated the code. So I think that, yes, this is the beginning. At least for me, I didn’t know how accessible this was and that we now have this tool to be more creative. I think the first thing we’re going to see is a lot of creative work. But I think it’s going to be a little bit more time until we see it in action in our chat bots and in Siri and Alexa, et cetera.
[00:27:07] SY: Yeah, I did a demo of GPT-3 not that long ago, the auto-complete writing, where you give it a first sentence and it auto-completes it for you. And actually I didn’t do it. My husband did it and he didn’t tell me it was GPT-3. He was like, “What do you think of this story?” And the beginning sentence was something like, there’s a dog in the road and there’s a car coming down the street, barreling at it, and the dog freezes, and GPT-3 fills in the rest of the story. And it was this pretty disturbing, graphic story. I won’t go into it, but basically the car runs over, or almost runs over, the dog. It was pretty violent. I was like, “Who’s this psychotic writer who wrote this awful, creepy story?” And then Rob was like, “Oh, it’s the machine.” And I was like, “Whoa! That’s crazy.” So tell me about this idea of AI truly mimicking the human mind. Is this kind of the first step towards that? Or are we still many, many years, decades away from that happening?
[00:28:08] PC: I would say the act of creating artificial intelligence is trying to replicate, or design, or be inspired by the human mind. So we can recognize objects, cats, dogs, food, thousands if not millions of things, right? Computers weren’t able to do this before. Back in the ’70s, when a lot of the foundational research of today’s artificial intelligence was being done, facial recognition and detection of text and numbers had to be done manually through handcrafted features. Now we’ve basically been able to automate that process. We have models where you just show a hundred images of a cat and a hundred images of a dog, and with the ability to share models, we’re able to use transfer learning and start to build upon them. So our base models are getting smarter and smarter, and I think one day they will surpass human intelligence. So I think that we’re on our way. I think we’re still many years away, but it’s going to be very interesting to see more stories that are generated by AI, more music that’s generated as well, poems, scripts. I was even imagining comedy sketches: imagine giving a group of theater students a script that’s generated by AI and they have to act it out. So I think it’s going to unlock a lot of creative aspects. But in addition, it’s going to help in the scientific sector as well. So I think it’s going to affect all our lives, and it will keep becoming more intelligent and help us in our daily lives.
[00:29:53] SY: So I want to talk about GPT-3 making AI more general and more accessible. Do you feel like this is kind of the beginning of anyone being able to use AI?
[00:30:05] PC: So I would argue that it is already possible. Today, software developers can use AI systems without even creating the algorithms, right? Most big companies have AI APIs. IBM has Watson, and you can interact with Watson through an interface that doesn’t require coding. There’s even a platform called “Machine Learning for Kids” that teaches kids as young as five how to use artificial intelligence systems, how to train a model on the difference between cats and dogs, how to collect images. So I would say the era of artificial intelligence and machine learning has already begun, and we’re starting to see it more and more in our classrooms. The challenge is that there’s a need for education. Many people, even I thought two years ago, when I started diving deep into artificial intelligence, that it was too complicated, that I had to go and get a PhD. I learned at the Call for Code Hackathon back in 2018 that that wasn’t true. With an API, you could create something useful with machine learning. So it’s definitely possible today, and yes, GPT-3 is going to unlock a lot more possibilities for us.
[00:31:20] JP: So a lot of machine learning and AI tech, like facial recognition, gets a lot of criticism because of how biases can get baked into the system, and it can lead to some pretty grotesque outcomes. I wanted to ask, what are some of the worries you have if or when biases creep into these sophisticated auto-complete programs that people will be making?
[00:31:40] PC: Yeah, definitely a great question. So many companies are starting to remove those facial recognition capabilities. Now, that’s still something researchers can do on their own. Just as you can create any type of website, machine learning researchers can use any type of data set that they please. IBM has actually removed those features. I think that is an excellent move by companies. We still have facial recognition everywhere. When you’re zooming in on someone’s face with your camera and it focuses, it’s using facial recognition. Many people might not be aware of that. So facial recognition is not inherently bad. It’s more when you have racial bias. I’ve experienced this. I am a person of color, a Latino living in Puerto Rico, and I saw that many online platforms that were built to predict your age or your gender weren’t usually correct. At the time, that wasn’t a big issue because I would think this was just research. But as this comes to the hands of the public and other people, sometimes that research does not take into consideration how the rest of the world feels, and that’s the important part here. I think making AI accessible, and many companies and organizations have this goal of democratizing AI, could help remove these biases or at least mitigate them.
[00:33:11] SY: Thank you so much.
[00:33:12] JP: Thank you.
[00:33:13] PC: Thank you so much.
[00:33:25] SY: Thank you for listening to Dev News. This show is produced and mixed by Levi Sharpe. Editorial oversight by Vaidehi Joshi, Peter Frank, Ben Halpern, and Jess Lee. Our theme music is by Dan Powell. If you have any questions or comments, dial into our Google Voice at +1 (929) 500-1513. Or email us at [email protected]. Please rate and subscribe to this show on Apple Podcasts.