It's a no-brainer how exciting this neurotech news is
In this episode, we talk about NVIDIA's new graphics cards, Samsung’s mobile memory breakthrough, and a terrible TikTok trend. And then we speak with AJ Keller, co-founder of Neurosity, about Elon Musk's Neuralink pig demo.
Saron Yitbarek is the founder of Disco, host of the CodeNewbie podcast, and co-host of the base.cs podcast.
Josh Puetz is Principal Software Engineer at Forem.
AJ Keller is the co-founder of Neurosity.
[00:00:00] LS: Hey DevNews listeners. This is Levi Sharpe, the producer of the podcast. We really want to benefit from your feedback on the show. So we’re gifting anyone who submits a review on Apple Podcasts, a pack of Dev stickers. All you have to do is leave a review on Apple Podcasts and then fill out the form in our show notes so that we have your mailing address for the stickers. Thanks for listening.
[00:00:31] SY: Welcome to DevNews, the news show for developers by developers, where we cover the latest in the world of tech. I’m Saron Yitbarek, Founder of CodeNewbie.
[00:00:40] JP: And I’m Josh Puetz, Principal Engineer at Forem.
[00:00:42] SY: This week, we’re talking about NVIDIA, Samsung’s mobile memory breakthrough, and a terrible TikTok trend.
[00:00:48] JP: And then we’ll be talking about Elon Musk’s Neuralink brain monitoring device with AJ Keller, Cofounder of Neurosity, a neurotech company.
[00:00:56] AK: Realistically speaking, you are opening up someone’s head and there’s a lot of unknowns. So it’s really about creating situations where we’re able to know what can go wrong and prepare for that.
[00:01:07] SY: This week, Apple said it’s implementing two updates to its App Review policy where “bug fixes will no longer be delayed over guideline violations, except for those related to legal issues”, meaning that incidents like Apple holding back updates to the WordPress app for a potential violation of their guidelines, which we covered in last week’s episode, shouldn’t happen anymore. Also, Apple is not only allowing developers to appeal violations, but also giving them the ability to suggest changes to its guidelines, which is pretty good news for developers.
[00:01:39] JP: Yeah. I wonder how they’ll screw this one up. I’m just joking.
[00:01:43] SY: Josh, have some faith! Come on!
[00:01:46] JP: At least something’s changing and it’s something that’s not a court order. So progress.
[00:01:50] SY: Very true. Very true.
[00:01:51] JP: Well, this week, NVIDIA introduced new graphics cards. They’re all based on their Ampere GPU architecture. The RTX 3000 series is the successor to NVIDIA’s wildly popular RTX 2000 series, which was first introduced over two years ago. So it’s been a while.
[00:02:07] SY: That’s a while in tech. Yeah.
[00:02:09] JP: It is a long while. NVIDIA says these new cards are twice as fast as the older graphics cards they’re replacing. And while these cards are perfectly capable of running PC games up to 8K resolution, which is amazing, right? The big news here is that the chipset they’re based on, Ampere, is designed specifically to allow GPU cards to perform better in applications like AI.
[00:02:32] SY: Oh, cool!
[00:02:33] JP: Yeah. To that end, NVIDIA showed off some applications running on these new cards that utilize this improved AI performance, and they did stuff like removing backgrounds and noise from a video stream in real time.
[00:02:45] SY: Oh, wow!
[00:02:46] JP: It was amazing. No green screen, no pop filters on the microphone. It just did it as the person was streaming. Really cool.
[00:02:52] SY: So better than the virtual backgrounds we’re seeing on Zoom, I assume.
[00:02:55] JP: Absolutely. It was incredible to watch. There was a slider. He like slid the slider and it increased the blur in the background and then swapped it back down in real time.
[00:03:02] SY: Interesting.
[00:03:04] JP: They’re not cheap, however, as you might have guessed. The prices range all the way from $700 at the low end, which is pretty expensive, to a whopping $1,500 at the high end.
[00:03:16] SY: Wow!
[00:03:17] JP: I thought what’s really interesting about this was this Ampere platform, which is designed for AI, and it reminded me that graphics cards are used for so much more than just games now. We see them used for research, AI, cryptocurrency mining.
[00:03:29] SY: It’s so powerful.
[00:03:30] JP: Absolutely incredible. The other thing that stood out to me was that they were touting that you can run games at 8K resolution, which…
[00:03:38] SY: I didn’t know that was a thing.
[00:03:40] JP: I don’t know about you. I just barely got a 4K monitor.
[00:03:44] SY: Where did 8K come from? I feel like out of nowhere, all of a sudden, I kept hearing 8K, and I was like, “What? Didn’t we just do 4K?”
[00:03:50] JP: Well, I think if CES had happened this year… The ongoing joke is there’s always some disposable television technology that happens every year at the Consumer Electronics Show. I don’t know if you remember, curved screens were all the rage a couple of years ago.
[00:04:03] SY: Oh, yeah.
[00:04:04] JP: I’m sure you have a drawer of 3D glasses for your 3D television that you bought.
[00:04:08] SY: Of course.
[00:04:08] JP: Yeah, of course. So a lot of people said 8K was going to be the big breakthrough technology this year. I was just listening to a podcast and they were touting a phone that took 8K video and one of the hosts was like, “Well, come on. What are you going to do with 8K video?” Their video editor broke in and said, “Actually, 8K video is really handy when you’re editing video because you can crop different sections of the video. You could zoom in. You could crop out.” And it’s like, “Oh, that makes sense.”
[00:04:40] SY: That makes sense. That makes sense.
[00:04:41] JP: Right? They were saying, “Even though we’re producing a 4K video stream, we love 8K because we can crop on down.”
[00:04:47] SY: So when they do that thing on TV shows, they’re like, “Let’s zoom into the security camera,” like that could actually potentially be a thing?
[00:04:54] JP: Well, I see it on YouTube videos all the time when they have a spicy take from the host and they like zoom in on them. Yeah.
[00:04:59] SY: Yeah. Very cool. Very cool. So in recent disheartening news, there was yet another horrendous TikTok trend called “The New Teacher Challenge”. Okay. This one is pretty bad. In this challenge, parents showed their children photos of disabled people, told them it was their new teacher, and recorded their reactions, which were often frightened. Melissa Blake, a disability activist who has Freeman-Sheldon syndrome, which is a genetic bone and muscular disorder, was one of the people whose photo was used in the challenge. And they wrote a really powerful piece in Refinery29 about their experience. One of the things that sticks out in the piece is not only the cruelty and insensitivity, but the fact that this kind of harassment is often not seen as a violation of rules by TikTok or many other platforms.
[00:05:50] JP: Really?
[00:05:50] SY: Yeah, which I thought was interesting.
[00:05:52] JP: But I think this counts as bullying.
[00:05:54] SY: Melissa Blake writes really clearly about where they stand on the subject. They say, “I want to be clear: I am violated. Every single time. Each photo, taunt, and cruel word is a clear violation of my dignity and worth as a human being. And every time these platforms failed to take action, they’re sending the message that this bullying is okay. So many disabled people have become inured to our appearance being mocked. That’s not something we should ever have to get used to”. So this story actually reminded me about a conversation that I had on CodeNewbie Podcast with Coraline Ada Ehmke, who’s a software developer and creator of the Contributor Covenant, which is a very influential code of conduct, designed to change the way developers and tech companies think about and write their own codes of conduct. So she helped us understand a bit of the backdrop of what inspired her to write the Contributor Covenant, which is specifically geared towards contributors to open source software.
[00:06:47] CE: Back in 2014, it was very controversial to have a code of conduct at a tech event, at a tech conference. It was a huge controversy. It was a huge fight to get tech conferences to adopt codes of conduct. And while I was seeing this unfold and taking part in these very heated discussions, I saw the analog to that. Not only were in-person events and communities needing to have a statement of shared values and a statement of what behavior is encouraged and what behavior is discouraged and what would be tolerated and what wouldn’t be tolerated, but what about the place where we’re writing the code and not just where we’re talking about it and sharing it? And that’s when I had the idea for creating a code of conduct for open source projects, and that’s how the Contributor Covenant was born.
[00:07:37] SY: Nowadays, we see codes of conduct in a lot of places in tech, particularly conferences and meetups, and they’re definitely slowly becoming the norm, which is really great to see. But in a situation like TikTok, I can imagine them being hard to enforce because oftentimes when we talk about harassment, it’s direct, right? It’s like, “I’m going to harass you. I’m going to say something mean to your face.” But this challenge is this indirect harassment where if Melissa Blake hadn’t heard about it, would it be harassment? Does that make sense?
[00:08:08] JP: Yeah.
[00:08:09] SY: If it never got around to this person, then can harassment be claimed when it’s this indirect kind of weird space. You know what I mean?
[00:08:19] JP: Right. Where do you draw the line at cyberbullying? What if instead of a picture of Melissa Blake, it was a picture of one of the student’s classmates and it was getting around that this student’s photo was being shown? Some people would draw the line closer, like, “Oh, you know the person. Now it’s bullying.” But it doesn’t seem like that’s a really great test for whether bullying is happening, whether you know the person or not.
[00:08:41] SY: So that’s interesting because when you said student in the class, it made me go, “Obviously, that’s bullying.” Like that’s just kind of where my reaction was, but then I’m thinking if we get even further away from the person like a celebrity, right? Like if there was a celebrity photo going around and we’re making judgements, is that harassment? Like that doesn’t feel like harassment to me because they’re like a public figure and I feel like it just feels different, but maybe that’s harassment too and I’m just a jerk. I don’t know.
[00:09:04] JP: Maybe it’s easier to imagine a situation where you have two students in a classroom or two students in a school, and one is showing a picture of a student around and they all know each other; it’s a couple of kids. I think that’s easier for us to process sometimes. But the idea that there’s a person out in the world, Melissa Blake or someone else, and her picture is being shown hundreds of thousands of times… it seems like once we get beyond about ten people, it gets really hard for us to even fathom how many people are out there. So once it goes viral, you’re just like, “Oh, it went viral. Oh, well, it’s too bad.” I would hope one of the things that eventually gets taken up in the conversation is not just the impact of social media on our democracy, which of course is a huge deal, but the impact of social media on our public personas and how that can bounce back at us, and not putting the onus on a particular person by saying, “You should be spending less time on Facebook,” but saying, “Hey, Facebook, you could actually damage people’s reputations if you don’t do something about this.”
[00:10:08] SY: Exactly. And that’s kind of where my mind was going too, because I would definitely argue that Melissa Blake is being harassed, but it’s a different form of harassment than I think we’re used to. Right? I think the way that we define harassment is more direct, and this just feels like a different form. And so when we talk about creating safe spaces for our platforms, when we talk about our meetups, our events, our software, when we talk about making sure that everyone feels safe and welcome, and we have these codes of conduct, how do we make them enforceable for this shape of harassment? Does this fall under what we’re already talking about or do we need to make adjustments? I’m curious to hear what the tech community thinks of that and how we might be able to navigate situations like this.
[00:10:53] JP: I think in general the tech community has traditionally been very bad at addressing these concerns.
[00:10:59] SY: We’re getting better though, right? Right. Maybe.
[00:11:01] JP: We’re getting there.
[00:11:02] SY: Yeah. We’re on our way. We’re on our way.
[00:11:21] SY: Heroku is a platform that enables developers to build, run, and operate applications entirely in the cloud. It streamlines development, allowing you to focus on your code, not your infrastructure. Also, you’re not locked into the service. So why not start building your apps today with Heroku?
[00:11:38] JP: Vonage is a cloud communications platform that allows developers to integrate voice, video, and messaging into their applications using their communication APIs. So whether you want to build video calls into your app, create a Facebook bot or build applications on top of programmable phone numbers, Vonage has you covered. Sign up for an account at developer.nexmo.com and use the promo code DEVTO10, that’s D-E-V-T-O-1-0, by October 10th for 10 euros of free credit.
[00:12:12] JP: Well, next up, Samsung says its latest mobile memory is a production breakthrough. The company says they’ve created a memory chip which might actually be a true breakthrough in chip technology. Their 16-gigabit LPDDR5 mobile RAM chips are said to be the first mobile memory chips made using extreme ultraviolet lithography. Now Saron, I’ve got a quick question for you. True or false: did you think, like I thought, that tiny little machines make chips and memory? I blame it on those Intel ads in the mid-90s with people in bunny suits. Right?
[00:12:51] SY: Yeah.
[00:12:52] JP: I thought they were actually making the chips.
[00:12:53] SY: Not our fault.
[00:12:54] JP: Right. They have their hands in the machine. It looks kind of crazy. Well, that’s actually not how chips are made at all. So it turns out since about 1977, there’s been this process called lithography, which is used to make chips, and it’s a lot like making a photograph. You shine a light through an optical mask and it hits chemicals on a surface. The light reacts with those chemicals and then they’re kind of etched and washed off. That leaves etchings in the chip, and then you can take a material and kind of coat the chip with it, and it only goes into the etchings, and ta-da, you have very, very tiny little channels.
[00:13:27] SY: Interesting. Yeah. I definitely would not have guessed that light made chips.
[00:13:31] JP: Right. Yeah. My guess was not going to be that it’s like a Polaroid. So that is super cool, but there’s been a problem we’ve been getting closer and closer to. And this leads into, if you’ve ever heard about chip-making processes, when they talk about a 12-nanometer process or a 10-nanometer process, there’s a physical limitation on how small you can make these etchings based on regular light and its wavelength. So the cool thing about extreme ultraviolet lithography is that it uses ultraviolet light, which has a much smaller wavelength than visible light, so you can have much smaller etching patterns on the chips.
[00:14:07] SY: Oh, nice! Very cool.
[00:14:08] JP: So Samsung says this process allows for chips that could be even thinner as well as hold more memory and be even faster than traditional memory chips. They started making desktop memory chips with this process last May, but this is the first time anyone’s done this with a mobile phone memory chip.
[00:14:23] SY: So does that basically mean our phones will be even faster?
[00:14:26] JP: Even faster, even smaller, potentially thinner. I think those are all on the table.
[00:14:31] SY: Very cool.
[00:14:32] JP: The other thing this brought to mind was some of the struggles Intel has been having with their miniaturization processes. They’ve been trying to get to a 10-nanometer process for quite a long time. One of the factors is being able to control this ultraviolet light.
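The wavelength limit Josh describes is often summarized by the Rayleigh criterion, CD = k1 · λ / NA: the smallest printable feature scales with the light’s wavelength. A minimal sketch with illustrative k1 and NA values (real tools differ, e.g. immersion DUV scanners use much higher NA plus multi-patterning):

```python
# Rayleigh criterion: the minimum printable feature (critical dimension)
# scales with the light's wavelength. The k1 and NA defaults below are
# illustrative, not any specific tool's parameters.

def critical_dimension(wavelength_nm: float, k1: float = 0.4, na: float = 0.33) -> float:
    """Smallest printable feature, in nanometers."""
    return k1 * wavelength_nm / na

duv = critical_dimension(193.0)   # deep-UV ArF laser, used before EUV
euv = critical_dimension(13.5)    # extreme-UV, as in Samsung's new process
print(f"DUV ~{duv:.0f} nm vs EUV ~{euv:.0f} nm minimum feature")
```

The roughly 14x drop in wavelength (193 nm to 13.5 nm) is why EUV allows much finer etching patterns without exotic tricks.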
[00:14:45] SY: Good for them. Good for them. So in strange, potentially dystopian, sci-fi-ish news, Elon Musk presented a demo of his Neuralink brain monitoring device on three pigs. The company was launched in 2016, and Musk has described the Neuralink device as a Fitbit in your skull with tiny wires, which, I don’t know about you, but that does not sound appealing.
[00:15:10] JP: I don’t like this season of Black Mirror at all.
[00:15:12] SY: Yes. Because the Fitbit is like pretty big. Isn’t it?
[00:15:16] JP: You could also take it off.
[00:15:17] SY: That’s true. The device, he hopes, will help monitor and solve neurological problems including depression, memory loss, and brain damage. In the demonstration of the device, Musk was able to show that the device was indeed receiving signals from the pig’s brain, showing spikes in activity and playing a sound whenever the pig came into contact with something with its snout. And coming up next, to shed some light on this news, we’ll be joined by AJ Keller, Cofounder of Neurosity, a neurotech company, after this.
[00:16:05] SY: Heroku is a platform that enables developers to build, run, and operate applications entirely in the cloud. It streamlines development, allowing you to focus on your code, not your infrastructure. Also, you’re not locked into the service. So why not start building your apps today with Heroku?
[00:16:22] JP: Vonage is a cloud communications platform that allows developers to integrate voice, video, and messaging into their applications using their communication APIs. So whether you want to build video calls into your app, create a Facebook bot or build applications on top of programmable phone numbers, Vonage has you covered. Sign up for an account at developer.nexmo.com and use the promo code DEVTO10, that’s D-E-V-T-O-1-0, by October 10th for 10 euros of free credit.
[00:16:55] SY: Joining us today to talk about the Neuralink news, we have AJ Keller, Cofounder of Neurosity. Thank you so much for joining us.
[00:17:02] AK: My pleasure. Thank you for having me.
[00:17:04] SY: So tell us a little bit about your background and what Neurosity is.
[00:17:09] AK: I have been working in the field of neurotech for about five years now. Before that, I was a robotics engineer for Boeing, helping them automate the assembly of aircraft.
[00:17:19] SY: Wow!
[00:17:19] AK: They can’t just hire more people, but if you use robots, you can build them faster. That was the general idea.
[00:17:25] SY: Oh, wow! That’s huge.
[00:17:27] AK: I loved it, but I knew that if we could connect our minds to the robots that were coming out onto the floor, the factory world would be much different and much more efficient. So I started researching it and understanding that there was some technology that existed that just was waiting to be commercialized. And that’s really where I started diving in.
[00:17:44] SY: What kind of technology?
[00:17:46] AK: So Neurosity right now is helping programmers get into the zone much faster than before by leveraging the fact that your brain produces brainwaves that are associated with focus. They’re called gamma oscillations, and these are associated with attention. We monitor that: as your level increases, we know that you’re getting more focused, and if there’s a drop, we know that you’re losing focus. Then we play that back through a Spotify integration that we just have, which allows your music to essentially keep being tuned so that we’re increasing your focus. If your focus drops, then we know that the features in the currently playing song need to be tuned. And if your focus increases, we know we need to keep playing music like that. So it’s sort of like how a DJ can tell if the audience is losing their energy and not dancing versus having a great time dancing.
[00:18:37] SY: Interesting. Yeah.
[00:18:39] AK: So we leverage this technology called electroencephalography, which is the observation of the electrical activity inside your head from outside of your head.
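The feedback loop AJ describes, measure focus, compare it to the recent trend, and adjust the music, could be sketched roughly like this. Every name and threshold here is hypothetical, not Neurosity’s actual API:

```python
# Rough sketch of a closed-loop focus/music system: track a focus score
# derived from EEG (e.g. gamma-band power) and adjust the music when the
# score drops. All names and numbers are illustrative.

def next_track_tempo(focus_history: list[float], current_tempo: float) -> float:
    """Pick the next track's tempo from the recent focus trend."""
    if len(focus_history) < 2:
        return current_tempo          # not enough data yet: change nothing
    trend = focus_history[-1] - focus_history[-2]
    if trend >= 0:
        return current_tempo          # focus rising: keep playing similar music
    return current_tempo * 1.1        # focus falling: tweak a feature (here, tempo)
```

The real system presumably tunes many audio features, not just tempo; the point is the feedback structure AJ outlines: measure, compare, adjust, repeat.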
[00:18:50] JP: So Neurosity is monitoring brainwaves from the outside of the brain obviously. We saw this week that Elon Musk and his company demoed a pig with an implant showing some of that kind of telemetry from inside of a pig’s brain. I was wondering if you could tell us what your first impressions of that Neuralink demo were.
[00:19:08] AK: Wow. Amazing. I think it’s a really cool step forward because it’s such an integrated system. So the reason why we don’t all have the ability to talk to our computers is simply the fact that there’s no hardware that allows it. So I always say it’s a hardware problem that is the blocker for that and so seeing hardware starting to be integrated and the speed at which they’re doing it was really exciting for me because I think it’s the final frontier.
[00:19:39] SY: So as an expert in the neurotech field, does it feel like as big of a deal to you as it does to us? Because I think we were like, “Whoa! What in the world is happening? What is going on?” But as someone who’s a little bit closer to the subject and has been working in the field, did it seem that out of this world to you?
[00:19:57] AK: Great question. I think it’s inevitable that this type of technology happens, and there’s another fantastic company called Paradromics. They’ve done 64,000 channels of recording, right? And so that’s a couple of powers of two bigger than what Neuralink is doing.
[00:20:15] SY: Wow!
[00:20:15] AK: And I think the really important thing that Neuralink has done is produce it with a really easy-to-install, or possibly easy-to-install, but repeatable process. If they can really get to a point where you’re able to install it into a head and run that, I think it’s an amazing thing. So for me, as someone who’s in the field, it’s an amazing feat, and I think it’s great that they’re announcing what they’re doing and being public about it. So yeah, I think it was a huge deal. Everyone was watching it, and I have people who are in the space now thinking about how they can start getting involved with it.
[00:20:55] JP: I’m curious, what kind of engineering and development efforts go into making a product either like this implant or like Neurosity’s?
[00:21:02] AK: Yeah. Let’s just focus on Neuralink for a little bit. It really starts with mice, and then it works its way up to pigs and monkeys, primates, right? And then it goes into human trials. What these companies, Elon Musk’s Neuralink and Paradromics, are doing means they actually have to go through this rigorous FDA process. And sometimes you can get a breakthrough device designation, which is what Neuralink announced they got. The FDA is essentially saying, “This is a new device that can potentially help people who have an ailment that is not treatable by any other means, and you have the potential to do that. So we’re going to help you fast-track this so that we can start getting it to people who just have no other options.” It’s like if you invented a cancer treatment, they’re going to say, “People who have this today just don’t have this treatment available. Let’s fast-track this.” And all of a sudden, they’re helping and supporting you through that because it’s a new and novel process. So that is a lot of effort, and it takes hiring people who have done it before, who are familiar with how to do bone implants and things like that and have gone through this process. It’s a very long and meticulous process. And I also have to say that on the engineering side, what Paradromics and Neuralink are doing really well is creating integrated circuits. They’re creating something called ASICs, Application-Specific Integrated Circuits, which are very expensive to make. These are like millions of dollars to make one of these, and then as you scale up, the cost per unit drops to cents. They’re making their own custom silicon chips for this, which I think is really the biggest technological feat that we’re seeing here before our eyes.
[00:22:51] SY: So Musk has some pretty big goals for this device. It’s not just about brain function monitoring. It’s also eventually meant to fix some neurological problems, as we talked about briefly, and then it has some lighter goals, like summoning a Tesla or controlling computer games with your mind, which are a little bit more on the fun side. Do you think these things can actually be made possible with this device, or are these goals too lofty?
[00:23:16] AK: Some of the technology that’s needed to restore walking for someone who has lost the ability to send neural signals from their brain through their spine to their legs, for example, is well understood, and people have done this in labs, where they have in-head recordings and are then able to bypass the spinal cord and send electrical impulses to your muscle fibers, which then allow you to walk. They’ve done this with an arm and different things like that. So it stands to reason that you could have two Neuralinks in there that are monitoring the parts of your brain that are well understood, that are controlling your legs, and that are bypassing the spinal cord and going to little electrical stimulators on your legs that allow you to walk. That’s totally, I think, a great use for this. Some of the other things, like depression and anxiety, are going to be tougher to treat because those aren’t as localized. As far as science knows, those aren’t localized to a specific area, whereas there are literally wires going from your toe all the way up your spinal cord to your brain, and that’s mapped out, and in 99% of people it’s the same. So it’s really well understood. Another thing that’s really well understood is the ear. So hearing, right? There’s something called the cochlear implant, which restores hearing to people who have lost it by applying electrical stimulation to a nerve so that you can essentially hear. So there are things that this technology will be able to sort of connect the dots for. But one of the biggest things that I see they need to build is the operating system that’s actually running on your head. Right now they’re using Bluetooth to stream to another device, which is then doing the edge processing. So if you’re building something that’s helping you walk and you walk too far away from your phone, then all of a sudden you’re just stuck there.
[00:25:09] JP: So we can’t really talk about devices being implanted in people without talking about some of the ethical and medical and privacy concerns. I was wondering, could you tell us about what some of the considerations are with a device like this becoming widely available and being installed in people?
[00:25:23] AK: Yeah. Absolutely. It takes a privacy-first approach, where you’re treating this just like how you treat your login to Google to look at your photos, right? This is private, sensitive information that you don’t want others having access to. One example that I’ve used is: if you have a device that’s able to detect epileptic seizures, you don’t want an insurance company, a potential insurer, to know that about you and raise your rates, right? So it’s important that your data is protected. And at Neurosity, we’ve always taken that privacy-first approach, where only you have access to your data through a device-claiming process. Our device is the only one on the market that requires a login. If you were in that room with that pig and you had something called a Bluetooth sniffer, you would actually have been able to pick up on the data that the device is sending out, and you would be able to monitor that. So that’s another reason why the operating system is so important: all of that data crunching has to actually happen on the device, or else you’re essentially leaking your neural information, which can then be snooped and used against you.
[00:26:34] SY: So I can imagine a number of things that can go wrong with having a device implanted in our brains. There are probably a ton of nightmare scenarios that could happen. So I’m wondering, what are some of the realistic concerns that we should think about when it comes to health and how it actually works?
[00:26:51] AK: Realistically speaking, you are opening up someone’s head and there are a lot of unknowns. So it’s really about creating situations where we’re able to know what can go wrong and prepare for that. If you look at how a rocket is launched, there are a lot of different scenarios that are planned out, and you can understand when things start to go wrong. So as long as you take a safety-first approach, and you use a gated process along the way, where you say, “Okay, it’s working in mice, it’s working in pigs, it’s working to help people who are paraplegic,” then we can start working on the larger installs. But it’s really important that you take a slow approach to it. Nothing good happens fast here.
[00:27:29] JP: You had mentioned some of the data and privacy concerns with a device like this. And I’m wondering, given how much of our data is already bought and sold and used by companies, do you think giving so much power to a company to peek inside the aspects of our brain is a good idea at all?
[00:27:44] AK: I think the device will be made anyway, and it’s important that we have companies that are ethical. So at Neurosity, we put privacy as one of our core values, alongside quality. Right? Building a company around the ideas of privacy and quality keeps you from making outlandish claims that are not backed by science, and at the same time makes sure privacy is considered when you design the device, right? And I can tell that Neuralink is not as focused on privacy right now. It’s not one of their cornerstones, because they’re sending that data all over the room when they’re using these Neuralink devices. Right? So you start to think, “All right, if there’s a company that’s going to do this, I really hope that privacy is one of their cornerstones.” And that’s why, with our device, the Notion device, you’re able to actually have all of your sensitive data computed locally on the device, and then all the device is doing is using HTTPS to send your focus score to a server. But you have to authenticate with that server. There has to be a login, a password. There’s a process where you’re granting permissions for an application to use this type of brain activity. And if you think about how you treat the features on your smartphone, giving access to the microphone, giving access to the camera, you’re sort of giving access to different parts of your head in the same exact way.
[00:29:07] JP: It really puts it in perspective.
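The permission model AJ compares to smartphone camera and microphone access could look something like this in miniature. All names here are hypothetical, not Neurosity’s real API:

```python
# Illustrative permission model for brain data, analogous to smartphone
# camera/microphone permissions: raw signals stay on-device, and an app can
# only read a derived metric the user has explicitly granted it.

GRANTED: dict[str, set[str]] = {
    "focus_dj": {"focus_score"},   # the user approved only the derived metric
}

def can_read(app_id: str, data_type: str) -> bool:
    """An app may read a data type only if the user explicitly granted it."""
    return data_type in GRANTED.get(app_id, set())

# The app gets its granted focus score, but never the raw EEG stream.
print(can_read("focus_dj", "focus_score"), can_read("focus_dj", "raw_eeg"))
```

The design choice mirrors what AJ describes: compute sensitive data locally, send only an authenticated, user-granted score upstream.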
[00:29:11] SY: I think what strikes me about all this is the inevitability of it. We watch so many movies and TV shows that cover the future, Black Mirror, obviously, comes to mind where we see all these scenarios where there are implants in people, but it’s always felt so far away. And I think for me watching the videos of Neuralink and the demonstration made me think like, “Wow, we’re really not that far off from the things that we see on TV.” I like to think that’s a good thing, if we are actually able to help with spinal issues and with increased brain functionality, like I think those are all really great things, but it’s just amazing to see in four years since its creation, the fact that we’re already getting signals from pigs on when their snouts touch the ground. That’s just really mind blowing to me.
[00:30:01] AK: Yeah, absolutely. I was just reading an article about this by my friend, Avery Bedows, about how they started in 2016 with the Tim Urban Wait But Why post on Neuralink. It was like, “We want to create this thing that merges with AI.” And then in 2018, they unveil, “Okay, we’re going to have multiple sensors that are streaming information into your head.” And now we have one sensor in your head that’s targeted just at paraplegics. So you can kind of see the walking back as they turn this dream of sci-fi into reality.
[00:30:42] JP: The scope narrows.
[00:30:43] AK: Yeah, the scope narrows. And I think that’s really important and it’s inevitable as you’re taking something from this idea and you’re actually turning it into engineering and you’re working on the execution of that. So you start getting more and more focused and eventually it’ll sort of hourglass where it gets tighter and tighter and then expands out. And I think it’s really important that we just keep assuming that the next time that they talk about this technology, it’s going to be even more scaled back in their focus and to a point where they start rolling it out. And then years after that, we’ll start getting into the cool applications.
[00:31:17] JP: It sounds very much like how the rollout of Tesla’s autopilot for their cars work.
[00:31:23] SY: Good example, yeah.
[00:31:24] JP: Yeah, there’s a lot of grand claims that it narrows down and now you’re seeing it widened back out and now it’s actually in production.
[00:31:29] AK: Exactly.
[00:31:31] SY: Yeah. It started with lane control and then expanded out from there, and now you’re getting Summon and more and more features like that.
[00:31:39] JP: Is there anything else that you’d like to touch on that we haven’t talked about?
[00:31:42] AK: One fun thing I was thinking about was, “When can a web developer actually start working on this?” Right? I mean, when is the part of your bootcamp, when you’re learning how to become a developer, devoted to neural programming? When is that?
[00:31:59] SY: Do you think that’ll be a thing?
[00:32:51] SY: Very cool.
[00:32:52] AK: My question for you guys is, what’s the first app that you want in your Neuralink?
[00:32:59] JP: That’s a good question. This is really selfish. I wish it could stop me from swearing when I’m coding. I involuntarily swear a lot when I’m coding. I know a lot of developers have this issue. If I could just have a little safe-for-work filter I could turn on. That’s kind of scary to actually think about now. The Black Mirror script is writing itself. But I think that would be kind of interesting.
[00:33:23] AK: That’s a good one.
[00:33:24] SY: For me, there are so many situations where I’m talking to my husband and I get so overwhelmed that I just can’t put into words what I’m trying to say. There are so many times where I’m like, “Oh, I just wish you could read my brain right now.” You know what I mean? I wouldn’t have to say things. You’d just know how I’m feeling without me having to explain it. And I think with the Wait… What is it called? Is it Wait But Why? Is that what the blog post is called?
[00:33:48] AK: Yeah, the Tim Urban one.
[00:33:49] SY: Yeah. Yeah. Yeah. When we first read about Neuralink there, I think that was one of the examples they talked about, being able to really telecommunicate, which I thought was really cool, and I would be totally down for some telecommunication.
[00:34:01] AK: Absolutely. Yeah. I really want like a language for the brain that allows you to program with it.
[00:34:08] SY: Oh, cool!
[00:34:08] AK: And I know that’s kind of farfetched, but at some point we’re going to have to figure out a really good way, like how Swift was created to modernize iOS programming. There’s going to be a point where new languages are created just to even start programming for it. Like, what is the IDE of the brain?
[00:34:26] SY: BrainScript.
[00:34:27] AK: Yes. So many different things need to happen. So I’m just really excited overall about the Neuralink post and sharing that news, because it brings so many people into the space, and that’s really what we need. We just need bright people who are excited about the future coming into the space and bringing new ideas, and having this new final frontier of programming, exploration, and, in the end, self-improvement.
[00:34:53] SY: Well, thank you so much for joining us, AJ.
[00:34:55] JP: Thank you.
[00:34:55] AK: Thank you.
[00:35:06] Thank you for listening to DevNews. This show is produced and mixed by Levi Sharpe. Editorial oversight by Vaidehi Joshi, Peter Frank, Ben Halpern, and Jess Lee. Our theme music is by Dan Powell. If you have any questions or comments, dial into our Google Voice at +1 (929) 500-1513. Or email us at [email protected] Please rate and subscribe to this show on Apple Podcasts.