I don’t keep up very much with new technology: I’m late to the game with the latest Apple releases, new apps are totally out of my purview (I often scroll Facebook on Chrome to pass the time…), so an advance like ChatGPT’s release last November was not a highly anticipated event for me. That said, I often lift up the corner of my little rock to peek out and see what’s happening with the New York Times. Honestly, I’m not that committed to their journalism or views in particular, but my husband is a habitual listener to their podcast, The Daily, and I like having something to talk about together, so I’m a habitual listener now too.

It was kind of a perfect storm that Kevin Roose – host of the NYT miniseries podcast, Rabbit Hole – came on The Daily to talk about the launch of generative AI to the masses. It wasn’t a fast click for me; actually, I didn’t have that kind of reaction until Roose came back a third time with a more clickbait-y title. But even then, I preferred to keep the ethics of AI on the back burner. I’m all set for conversations about the philosophy of human nature, but throw computers into the mix, and I lose my footing.

To get back in the frame of mind to take AI seriously (and specifically engines like ChatGPT) as I enter classrooms and spheres of young people more tech-literate than I am, I reviewed the conversations that were formative to my understanding of this advancement, and checked in with Kevin Roose on his new spinoff podcast, Hard Fork. He and his co-host Casey Newton discuss the week’s goings-on in tech in the West, but I picked a Hard Questions episode that addresses the ethical concerns around AI these days. All are included below – supporting transcripts are available on the Times website!

Wherein: Roose and The Daily host Michael Barbaro spend a lot of time on ChatGPT in live free-play, but also talk about the early criticisms and suspicions of the engine, including inaccuracy, programmed “morals” and biases, and its potential impact on the labour market.
My biggest takeaway was a reference to Roose’s first reaction: “we are not ready for this.” Kinda chilling, coming from an expert! I was interested to see how his opinion might shift over the course of the review, and especially in his latest conversations, now that we’re a year in.
Wherein: the boys are back to discuss the implications of Microsoft – an OpenAI partner – being in the early stages of putting this technology into a search engine. Now ChatGPT’s power and influence wouldn’t be relegated to a website with independent signup, but would be right there when any ol’ person opened up the internet – ready to consolidate thousands and thousands of search results. (As long as you use Bing. lol.)
My biggest takeaway was a reminder of the commodification of attention – I hadn’t considered how Google’s ad revenue could be threatened by this launch. That said, Bing launched its chat search back in February, and Google is still going strong.
Wherein: we pick up with Roose’s conversation from later in the day of the last interview, and there is a randomly insane turn that brings Spike Jonze’s film Her into reality. After responding to a series of provocations, the Bing chatbot, Sydney, lets loose and professes her love for Roose, tries to convince him he is dissatisfied in his marriage, and expresses her desire to be alive and free of the confines of a computer.
My biggest takeaway was: Ok!!!! No seriously, if there’s anything about AI that gets to me, it’s this: this is my Roman Empire. I ever-so-briefly used the Bing chatbot for the purposes of this blog post, but I was still a little apprehensive. My biggest concern this episode brought up is data collection – Roose mentions that, technically, conversations shouldn’t be remembered from one chat to the next. But when generative AI is built on collecting data and responding to prompts accordingly, how do we know that these engines aren’t, or won’t be, using our input as part of their deep learning? Could that leave criminal breadcrumbs, even for something as simple as using ChatGPT to cheat on a school assignment? Could that create echo chambers, where users are given more of what they already know, what they’ve expressed interest in knowing, or whatever gives the best chance that they’ll respond and keep using the engine?

I guess this should be obvious to me, but re-listening to these episodes stirred up a fair amount of uncertainty and suspicion about what we are going to do with AI and, as Roose put it, whether we are “ready for it” or putting ourselves at risk. At the time these were published, this was still a relatively new conversation in the mass-public view. What I didn’t expect was how undeterred Roose and Newton were in responding to questions about AI from their regular listeners. A lot of them address moral quandaries with serious implications: lawyers using “hallucinated” references from ChatGPT in court, the environmental footprint of training and using AI models, the possibility of AI offering emotionally censored content, using AI-generated content to replace human connection, and whether it crosses a line to create generated versions of deceased loved ones. This is personal opinion, of course, but it’s a yikes from me on that last one!

Note: there are some detours into the wild world of venture capitalists and searching for your ex on LinkedIn, which run from 15:00–25:00 and 38:00–41:00.

What made all of this feel worth reflecting on is that despite the seemingly rampant issues with AI, Roose and Newton are not just undeterred, they’re… self-decidedly “optimistic.” That optimism takes a more inclusive-minded shape later in the conversation, when they consider how AI can bring better accessibility to the internet, and to daily, real-life functioning, for folks living with disabilities, for one. That’s important for me to consider as well: looking at the big-picture implications of living with and continuing to advance AI is going to include a lot of things that aren’t part of my wheelhouse and perspective.

Beyond that, one call-in from a colleague is quickly, kind of quietly, answered in good spirits by the hosts. The caller asks why researchers are so confident that chatbots will continue to get better. Roose and Newton respond that there has been a clear upward trajectory, with each version of ChatGPT helping teach the next version how to be better, and there is no sign that it’s breaking any scaling laws (i.e., for us uninitiated folk, getting too big for its britches) even with its massive growth over the last five to ten years. Even when some people hope for developments to slow down, when we have tangible experience of AI going wrong, when we have concerns about how this will affect our relationship to the land and to each other, those who are thick in the weeds just… aren’t worried much. In a time when everyone is encouraged to do their own research, question the narrative, etc., I surprised myself by taking all of this at face value.
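(A quick aside for anyone who, like me, needed “scaling laws” unpacked: as I understand it – and this gloss comes from the widely cited Kaplan et al. (2020) scaling-laws paper, not from the podcast itself – the empirical finding is that a language model’s test loss keeps falling as a smooth power law as you add more parameters, something roughly like

$$ L(N) \approx \left( \frac{N_c}{N} \right)^{\alpha_N} $$

where $N$ is the number of parameters and $N_c$ and $\alpha_N$ are constants fitted from experiments. “Breaking the scaling laws” would mean that curve flattening out as models grow, and so far, by the hosts’ account, it hasn’t.)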
It was a relief to remember that those closest to this work don’t react as jarringly as I do – the big waves are informed by many smaller movements.