This week, Jeremi and Zachary discuss the future and potential of Artificial Intelligence and our Democracy with Aurna Mukherjee.
Zachary sets the scene with his poem, Wires in Ancient Walls Are Like Grape Vines in Cell Towers.
Aurna Mukherjee is a sophomore at Liberal Arts and Science Academy (LASA) High School, graduating in 2024. She is part of the Women+ in Computer Science club at the school, and is interested in ethics and Artificial Intelligence.
- Aurna Mukherjee, Sophomore at Liberal Arts and Science Academy, Part of the Women+ in Computer Science Club
This Is Democracy: a podcast about the people of the United States, a podcast about citizenship, about engaging with politics and the world around you, a podcast about educating yourself on today's important issues and how to have a voice in what happens next. Welcome to our new episode of This Is Democracy.
We have a very special episode this week. I guess every week is a special episode, but this week is extra special, because we are following up on a goal we set ourselves weeks ago, which is a goal to bring more voices from young people onto this podcast. We're fortunate to have all kinds of thoughtful, experienced, insightful people week after week, but we particularly want to bring out more young voices.

Because young voices are not heard enough in our society. And we know that young voices not only have a lot to contribute, but they are the future of our society, and that future is becoming the present very quickly. We are joined today in that spirit by a really impressive young person. This is Aurna.

She's a sophomore at the Liberal Arts and Science Academy in Austin, Texas. She'll be graduating in 2024. She's part of a really interesting organization, the Women+ in Computer Science club at her school, and she's interested in, and has done a lot of thinking and work on, ethics and artificial intelligence.

And that's going to be our topic today: how do young people think about the world we're moving into, where artificial intelligence will be more and more a part of the things we do? It's already becoming that way. And how do we think about ethics and democracy in the context of a world where artificial intelligence is more common? Aurna, thank you for joining us today.

Thank you for having me. Before our discussion with Aurna, of course, we have a poem from another young person, Mr. Zachary Suri, who is another LASA student. Okay, this might be the maximum number of LASA students that we can ever have. Zachary, what's the title of your poem today?
Okay: Wires in Ancient Walls Are Like Grape Vines in Cell Towers. Let's hear it.

When you met me in the marketplace and we climbed to the top of the cathedral and looked down at the people in the square, it seemed so foreign for the automobile to be weaving its way between the people and the fruit stands. In this sense, I guess, a papaya too can be new, eaten sideways on a low garden fence, letting it soak into your tongue,

when the world around has folded itself into a Cubist landscape, modernity losing its dimensionality, its regionality. When you met me in the marketplace and climbed the church tower with me and traced the outline of the electricity in neat little rows of wires entering the garden and the graveyard through cutouts in the ancient walls,

didn't it seem so foreign, so frustratingly benign, the guidebook you held wrinkled in your hand and read from, pointing into the hill? In this sense, I guess, the grapes on the vine can take on the air of centuries, though born a week ago, ripening only yesterday; they are anxious, and letting them burst in your cheeks,

when you have seen the world fold in and out, meaningless digits through invisible wires, this is somehow majestic, as if eating from the vine is divine, holding a cell phone is sacrilege. We have broken time, you and I, when we went climbing up the spiral staircase and, breathless at the top, looked down over the marketplace. We have broken time.

It folds in and out and over itself again, until we can no longer separate the wires from the grapevine, until the grapevine becomes the algorithm and the digits the cell phone sends upwards at a satellite are saccharine and ripe for fermentation.

There's a lot going on in that poem. Zachary, what is your poem about?
My poem was really about the ways that technology, in particular computer algorithms and nascent AI technology, is warping our sense of time and changing how we understand the world around us, and in a somewhat artificial way makes us embrace everything that is modern and new, when around us the things that we now consider archaic or obsolete are still blooming and blossoming, and still very important.

So it may make us forget some of the important things around us. Aurna, how did you get interested in artificial intelligence and these kinds of things? Sure. So before I went into high school, I attended a summer camp where I learned about machine learning, and this obviously ties into AI.
At that point I didn't really know much about it, but I knew that I really wanted to learn more, because I had seen through examples the effect that it could have and what I could design with it, not limited only to graphs, but also statistical analysis. So all of that seemed really interesting to me.

And how did you then proceed from that early interest? Because for some people, artificial intelligence isn't something they encounter until far later in their academic careers. Yeah, definitely. So after I learned how interested I was in AI, I decided to take AP Computer Science as a freshman,

which also allowed me to do some more statistical analysis and use some of the methods that I'd learned, and I've just been working on it on my own since then. And what is Women+? Women+ includes LGBTQ+ communities. So we just try to make the club inclusive for anyone that wants to learn programming.
That makes sense. So clearly, from that title, there's a very strong ethical set of commitments in the way you approach artificial intelligence. What do you see as the role for ethics in a world where technology allows people like you to take on potentially powerful roles you wouldn't take on otherwise?

Sure. So technology, and AI in particular, is a very powerful tool, which I think has the potential to make life much easier and more convenient. But after all, AI is an algorithm that's being fed data, and it doesn't really have any sort of moral compass. So this is where we can see bias in AI, and that's where the issues with ethics arise.
Facial recognition, for example, has some of these ethical issues associated with it. Facial recognition software essentially takes in training data that doesn't really incorporate data points from darker-skinned individuals; it has mainly lighter-skinned individuals being accounted for. So we see facial recognition software discriminating against individuals with darker skin.
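The dynamic Aurna describes can be sketched in a few lines of deliberately toy Python (a hypothetical model, not any real recognition system): a detector calibrated on data where one group is underrepresented ends up rejecting that group far more often, even though no discriminatory rule is written anywhere.

```python
import random

random.seed(0)

def make_faces(n, center):
    # Toy "face embeddings": 2-D points drawn around a group-specific center.
    return [(random.gauss(center[0], 1.0), random.gauss(center[1], 1.0))
            for _ in range(n)]

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

# Imbalanced training set: 900 faces from group A, only 100 from group B.
train = make_faces(900, (0.0, 0.0)) + make_faces(100, (3.0, 3.0))

# "Model": accept an image as a face if it lies close to the training centroid,
# with the acceptance threshold calibrated on the same imbalanced data.
centroid = (sum(p[0] for p in train) / len(train),
            sum(p[1] for p in train) / len(train))
threshold = sorted(dist(p, centroid) for p in train)[int(0.95 * len(train))]

def false_negative_rate(faces):
    # Fraction of genuine faces the model wrongly rejects.
    return sum(dist(f, centroid) > threshold for f in faces) / len(faces)

fnr_a = false_negative_rate(make_faces(1000, (0.0, 0.0)))
fnr_b = false_negative_rate(make_faces(1000, (3.0, 3.0)))
print(f"group A false-negative rate: {fnr_a:.2f}")
print(f"group B false-negative rate: {fnr_b:.2f}")
```

In this toy run, the underrepresented group's false-negative rate comes out far higher than the majority group's, purely because of the 900-to-100 imbalance in the training data.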
That's the reason why, and that's an example of how AI is biased, or leads to biased outcomes. And do you think we do enough to talk about these ethical issues in artificial intelligence? I know, as someone who attends a high school, as you do, that is very focused on technology and science, among other things, that there's a lot of excitement around computer science and AI technology.
Do you think we talk about the ethics of such technology enough? So I think that there's definitely a lot of conversation happening about AI, and about regulating AI, because of these issues. And I think there's definitely more knowledge, or acknowledgement, about this among the younger audience.

I do think that there isn't as much being done to regulate it as there should be, but I think that there's definitely an interest in AI and an acknowledgement of some of the biases that we see associated with it. And just give us a better sense of what you see as some of the dangers. There are all sorts of people out there describing dystopian possibilities,

that one day the robots will take over. Is that what concerns you? What really concerns you? Yeah, so the things that definitely concern me are how AI has resulted in some unethical outcomes, and the impact that this has on young people in our generation. Just to give a couple of examples, I think that this ties into political views and college applications. Is that related to the question you were asking?
Absolutely. So in terms of political decisions, we see that social media has a large impact on our understanding of politics. We become more extreme in our political views because of the impact social media has on us, and that's because algorithms send us more material that plays to our biases.
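The feedback loop described here can be sketched as a toy simulation (hypothetical Python, not any platform's actual recommender): the recommender centers its suggestions on what was engaged with, the user engages mostly with agreeable content, and exposure in turn pulls the user's views toward the feed.

```python
import random

random.seed(1)

leaning = 0.1       # the user's position on a -1 (left) to +1 (right) spectrum
feed_center = 0.0   # where the recommender centers its suggestions

for week in range(100):
    # The recommender offers items clustered around past engagement.
    items = [feed_center + random.gauss(0, 0.4) for _ in range(20)]
    # The user engages mostly with items on their own side of the spectrum.
    clicked = [x for x in items if x * leaning > 0]
    if clicked:
        # Re-centering the feed on engagement, plus exposure nudging the
        # user's own views toward the feed, closes the feedback loop.
        feed_center = sum(clicked) / len(clicked)
        leaning += 0.05 * (feed_center - leaning)
        leaning = max(-1.0, min(1.0, leaning))

print(f"started at +0.10, ended at {leaning:+.2f}")
```

Even though the simulated user starts barely right of center, the loop pushes the leaning steadily toward the extreme, which is the polarization dynamic under discussion.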
Yeah. So algorithms will target content based on whatever we've seen before. That makes Democrats more left-facing and Republicans more right-facing, and this just further divides Republicans and Democrats; it makes the partisan divide larger. And what about college applications? So in terms of college applications, this isn't talked about as much, but I think it's really important, because AI is being used to essentially decide admissions.

Not entirely, but AI-scored application personality tests are a factor, used as a criterion for admission to a college. This definitely should be regulated, because if college applications are using personality tests, they're only going to take in a certain number of factors.

But as humans, we have probably fifty or sixty factors that define us; that's what makes us unique. So if an algorithm is only looking at a couple of these factors, it's not going to take in everything that would make us a suitable candidate for college. Some would argue, though, that in both of these cases, with regard to the algorithms that send us information and the ways in which one could use AI to create a personality test for college applications, we do that anyway; AI is simply making it more consistent. We tend to surround ourselves with people and views that confirm what we already think.

People tend to live around people who look like them and think like them, and colleges were doing this all along, trying to imagine the personality behind the application, looking for the right, quote unquote, character. So why is this so bad, if AI is just doing it more consistently?
So the issue with this is essentially that whatever we have in AI generalizes the application that we're looking at. If there are certain criteria that work for 90 percent of the population, they may not work for the remaining 10 percent, and they don't take into account many of the factors that define us as people.
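To make that 90-versus-10 point concrete, here is a tiny hypothetical sketch (invented factor names and numbers, not any real admissions model): a screening score keyed to the two factors that described the majority profile rejects an applicant who is stronger overall.

```python
FACTORS = ["test_score", "gpa", "leadership", "creativity", "resilience", "community"]

def overall_strength(applicant):
    # What a human reviewer could weigh: every factor counts.
    return sum(applicant[f] for f in FACTORS) / len(FACTORS)

def model_score(applicant):
    # A screening model fit to the most common profile ends up keying on
    # the two factors that predicted well for the majority of applicants.
    return (applicant["test_score"] + applicant["gpa"]) / 2

CUTOFF = 0.6

typical = dict.fromkeys(FACTORS, 0.7)   # solid across the board
atypical = {"test_score": 0.5, "gpa": 0.5, "leadership": 0.9,
            "creativity": 0.9, "resilience": 0.9, "community": 0.9}

for name, app in [("typical", typical), ("atypical", atypical)]:
    admitted = model_score(app) >= CUTOFF
    print(f"{name}: overall {overall_strength(app):.2f}, "
          f"model {model_score(app):.2f}, admitted={admitted}")
```

The atypical applicant has the higher overall strength, yet falls below the model's cutoff because the model never looks at the factors where that applicant excels; that is exactly the generalization failure being described.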
So while colleges may not look at everything that defines us, a human looking at our application is going to have a much better idea of who we are, and of what makes us suitable for college, than AI will. So you believe the subjectivity of human observation is still better than the algorithmic consistency of artificial intelligence?

Yeah, that's right. And how should government act, then, if you're right, and if we agree, and I think most human beings probably would agree with that? What role should government play in making certain that we don't go down the road of algorithmic consistency instead of human choice and judgment?
So I think there just needs to be a focus on regulating AI, because right now in college applications, for example, there's just a lack of regulation in terms of, you know, algorithmic bias. So I think that in order for the government to be involved, there should be protocols in place to make sure that the factors being considered aren't due solely to an algorithm. We should have in place

people that are actually reviewing our applications, and the government should play a role in that. That makes sense. Just taking a bit of a step back here, could you maybe explain to some of our listeners, who might not be as familiar with the topic as you are, how ubiquitous these technologies have become?

To what extent are they actually being used on a widespread basis for making decisions like college applications, and in local communities? Sure. So fortunately, right now these AI-driven tools are fairly recent, and they're not being used on a very large-scale basis. But I think that if we don't regulate them at this point, they will come to be used more frequently.

And that's when we won't be able to make a change very quickly, and that is the thing that I want to avoid, which is why I think it's important to recognize the issues associated with AI. I'm not saying that we shouldn't use AI; I think that we should, but we also should make sure that people are viewing our applications

and that we're being careful about this, because in something like a college application, it's important to get feedback from people that are actually reviewing our application. Yes. And I think it also relates to the point about algorithms providing us self-confirming news, right? Probably we should have some limit on the ways in which companies like Facebook and others are able to bombard us with information that plays to our biases.
This has become a real political problem, as you said so well, Aurna, in encouraging extremism without our even realizing it, in some cases. So that leads to a question that comes up almost every week on our podcast. If there's a role for government to play in protecting and supporting and regulating behavior within a democracy, what is that role, and most important in this case, Aurna, who should do it?

Because I don't think you're saying that elected officials should willy-nilly have control over where AI is used and how it's used; that could be even worse than what you're describing, right? So who should do this, and under what criteria? Yeah, I think that's a great question. I definitely think that the youth should play a larger role in

deciding how AI should be used, because when we're really thinking about all of this, it's the youth that's getting impacted by college applications or political decisions. We're the ones that are going to face the impacts of AI, so I think that we should have a say in how AI is being used.
And I think another thing that relates to this, and stop me if I'm going off track here, is that part of the reason there's such an issue with regulating AI is that many large corporations have a financial incentive to use AI, because it leads to, for example, targeted content, which allows companies to profit off of

people clicking ads, essentially. So I think that if the youth has a larger say in how AI is being used, and can control this more, we'll have fewer cases where this arises. And do you think it's also, to some extent, maybe a lack of understanding by an older generation of politicians and policymakers of what AI really is, and its implications for our society?

Yeah, I definitely think that that's part of it. I think that in our government right now, I'm not sure that everyone is very familiar with AI and its implications. I don't know if people in our government really know how to code. So I think that part of this ties into a lack of understanding among our government officials, which is why I think that youth need to get more involved.

But how do you think we can get policymakers interested in this technology, and on the other hand, get people who are interested in technology, like yourself, interested in government? How can we make the investment that our society has made over the past few years in technology like AI something that we can translate into effective government regulation and participation in policymaking?
That's another great question. I think, although people think that there isn't much of an association, there's a clear-cut association between AI and politics, which I talked about a little bit. But the issue is that if we don't think about AI in the context of politics, they will eventually become connected anyway.

And they are connected, because AI is leading us to become more extreme in our views, as I said before, and this has implications for our political decisions. Although people don't think of them as connected, we should, because they're completely intertwined. And that's the connection between AI and politics. Yes. And so do you think that we should therefore, and maybe this is what you're already doing, combine education in computer science with education in ethics? That they should go hand in hand, rather than

the way they often are treated now, as separate entities? Some students study philosophy and ethics; some students study computer science. I hear you saying that one needs to study both to be an effective contributor in our democracy. Is that where you're going with this? That's exactly where I'm going with this.

I think that you can't learn computer science or artificial intelligence without also knowing the ethics related to it, because if you don't, then how are you supposed to create algorithms that are not biased, that are taking in enough data points to produce a trained algorithm that takes in all the information? So then, as a young computer scientist, someone who's aware of the very important ethical issues that AI poses to our society and is interested in investigating them,

do you see yourself as someone who not only has an important role to play in maybe creating new technologies that will save lives and change the way that our world works, but also in working on social issues and political issues, things that will affect everybody and that everybody can understand?
Yeah. I think that, again, these issues go hand in hand. So I think that one cannot exist without the other, which is why we need to think about both of them in the context of one another. Did that answer your question? Yes, yes. I guess, though, it poses a really interesting dilemma in a sense, Aurna, right? Because while in the abstract it sounds lovely

to say that one should be educated in ethics and educated in computer science, these are both difficult and often technical fields, right? To be educated in ethics, one has to read philosophy. We've had philosophers on the podcast, and there's a lot of work that has to go into that, especially if you're going to do it seriously.

And of course, in computer science, a lot of work goes into that too. There's only a limited amount of time and a limited number of resources, right? So how do you think this model can work? What would you like to see, as a leading thinker among your generation? What would you like to see us creating in our educational and political institutions to allow people to do some of both of these things?

Give us some concrete, imagined details of what you'd like to see and how we would do this. Right. So it's not impossible, but if you're someone that really enjoys computer science, you may not be super interested in the philosophy of ethics, or something related to that. But it's not too difficult to

learn about the past issues associated with AI, so that you know how to implement an algorithm that doesn't repeat the same mistakes. So I think what I'm calling for, really, is for people to learn about some of the issues. For example, as I talked about, facial recognition and political decisions, and this even applies to things like self-driving cars,

which have had issues with AI and the ethics associated with them. Learning about examples in the past, where AI has not served as it's supposed to and has had bias in it, allows us to make algorithms that are more ethical. So what I'm hearing you say is that you're proposing ethics, an examination of ethics and AI, as a sort of debugging process, in the computer science method, and as a historical process.
Yes, I think that's what I'm promoting. I love it. This is certainly one of our themes week in and week out: that a historical interrogation of topics that often seem distant from the way we think about history can provide us with the perspective we need to make better decisions. I think you've given us terrific examples of this, Aurna.

Do you find that this is possible in your education now? And I don't just mean at school; I mean in other settings. Is it possible to do this, to bring the history together with the technical work? I think so. I mean, before I write any algorithm or try to do anything related to AI, I sometimes just search up

what issues have come up with this, or what issues associated with it have happened in the past, just to understand the history of whatever I'm writing before I actually do it. And I think that honestly allows me to become much more well-informed before doing something.

So this isn't even a question of, you know, how ethical AI is, or anything like that. It's just being knowledgeable before you write a piece of code. Right. You're learning about the context, and you're learning about the effects, often unintended, of other similar activities in the past, to inform yourself about the possible implications of what you're doing today.

I think that's so wise, and it echoes what we often say about many, many other topics. Do you think other young people think this way? I think not everyone thinks this way, but part of that is due to a lack of understanding among a lot of young groups. I think that some people don't understand the issues associated with AI.

I mean, I know that before going into high school, I didn't entirely know what AI really was, or what issues were associated with it. So I think that once we educate the youth about this, people will become interested, and there will be talk about this even more than there is now.
Zachary, do you agree with that? I think so. I've always been a little frustrated that some people who are very interested in technology as young people see it more as a form of engineering, if you will, than as a process that's deeply tied to social and political discussions.

And I think what Aurna has said is a very important step in moving towards an education system that emphasizes the social and ethical responsibility that's most lacking in older generations today. Yeah. I guess, though, what makes me a little skeptical of what is a very compelling and almost utopian, or at least hopeful, perspective is the evidence of so many of the people who created the companies that use AI today,

and their seeming lack of attention to these issues. And they were young when they did this. So Facebook, Google: these are not bad companies, but I think they are some of the examples of the misuses of AI that Aurna is dissecting so brilliantly, right? These companies were led by young people, created by young people, just like you and Aurna, Zachary. And

it often seems to me that the desire to accomplish something in the AI space, the desire to be good at that, and the desire to make money and maybe become famous often override the ethical instincts. I'm wondering what the brakes are on that happening for another generation. Well, I think part of what makes

this moment so important is that we're at the sweet spot, when we've seen a lot of the failings, a lot of the social, ethical, and moral failings of the first wave of social media companies and AI innovators. And so I think that there's an opportunity now to really get young people involved in discussing not just the spaces for innovation, but

an ethical and moral form of entrepreneurship that doesn't just emphasize money. And I'm not saying, and I don't think Aurna is either, that this is inevitable, but it's definitely something that we need to work towards, and it is possible in the space we're in today. Do you have those conversations now, Aurna?
I think that these conversations are definitely happening in my school, all around me. So I think that this is just something to be aware of. That's great. It's really, really positive to hear that these conversations are occurring. I think these are conversations that often don't happen in the business space; it's not that the people in that space are bad people, but the pressure of stockholders and the pressure of a competitive market often don't leave enough space

for that. I guess, Aurna, for our final question, and the one that I think is the most important question in some ways: if some of this is happening, if we're having these discussions about ethics and AI today, at least among young people, and that's what you've convinced me of, and I think you've convinced our listeners, what more should we be doing?

As the AI pioneers, and hopefully ethical pioneers, like yourself are coming of age, what does our society need to do to help you continue having these conversations, and not fall into the pattern of neglecting the ethics as the opportunities and pressures of AI continue to grow?

So I think part of this is just learning about some of the issues associated with AI, as I talked about. Another thing is just not falling, you know, into the trap of some of these companies that we were talking about earlier, like Facebook and Amazon. These companies target content, and sometimes the posts that we look at in our feeds are targeted towards our own biases.

If we question content when we see it come up, if we fact-check what we learn, then this allows us to be exposed to new types of information, which also allows us to understand the effect that AI has on us, and how our world could be different if we weren't persuaded by some of this targeted content that came our way.

I love it. More informed consumers. And I think this is definitely a great point you make, because I've seen this myself: younger users of technology understand better what to believe and what not to believe, or at least they know not to believe everything they see, in the way that, I'm sometimes amazed, older users think that because it's in front of them on the screen, it must be true.
Zachary, what are your closing thoughts on this? Aurna has given us really an argument for education, an argument, and in a sense it's a theme every week for us, for a broad liberal arts education being essential to democracy: thinking and fact-checking. What else? I think that these are all very key points about how we can bring technology and ethics to the center of our debates.

But I also think that there still is space for an older generation of policymakers, and even entrepreneurs, to make a difference in this space. I think that there are a lot of things that can be done, now and in the coming years, to make a really big difference in the future, in the way that AI is integrated into our society.

So I don't think we should dismiss the current policymakers and business leaders necessarily, but we have to make clear that the status quo is not going to be enough. Right. And I think what you, Zachary, and particularly what Aurna has done today is illustrate to us, both in style and substance, why it's so important to have more young people in these conversations.

I think one of the big takeaways, and Aurna said this at the top of our discussion, is that we need to have more young voices that are better informed and, in some ways, more ethically motivated, because they've seen what's happened in recent years. And those voices, even if they're less experienced, have a lot to offer. I think you've illustrated that in every way today, Aurna. It was a

pleasure to have you on, and I hope we'll be able to talk to you some more, and I hope you continue to do the ethical as well as the technical work that you're doing. Zachary, thank you also for your contribution; your poetry, especially on a topic like this, illustrates the interesting connection between the literary and humanistic imagination and the technological and engineering work that we're talking about.
And of course, thanks to our loyal listeners for joining us for this week of This Is Democracy. This podcast is produced by the Liberal Arts Development Studio and the College of Liberal Arts at the University of Texas at Austin. The music in this episode was written and recorded by Harris Codini. Stay tuned for a new episode every week.

You can find This Is Democracy on Apple Podcasts, Spotify, and Stitcher. See you next time.