Humanity in the Age of AI: A Conversation with Richa Dwor

What does it mean to be human in an age of rapid technological growth? As artificial intelligence becomes a dominant force shaping how we communicate, learn, and interpret the world, conversations about what to preserve of our shared humanity are becoming essential.

In a recent dialogue, Douglas College’s Richa Dwor shared her reflections on human dignity, critical thinking, and why reading on paper might be a small but powerful act of resistance.

Richa currently serves as the chair of the English department and the coordinator of the Research and Innovation Office for the college. A scholar of Victorian literature, she focuses on Jewish women’s writing from that period, as well as religion and affect: the categories of emotion that people feel within and through religious experience and in religious communities.

“If at a bare minimum, if what we can do is look at another person and say you can be in this world, and you can be in the world with dignity, and maybe that’s all I can give you, because I can’t understand your way of thinking… I think that, at bare minimum, is humanity. Humanism. It’s extending a kind of tolerance towards one another. To me, that’s what, at bare minimum, that’s what tolerance looks like. And then you can build onto that with compassion, empathy, kindness, generosity.”


Critical Capacities We Can’t Afford to Lose

“Firstly, we need to retain our critical competencies, our ways of thinking, our modes of thought that are grounded in the humanities. So by this, I mean the ability to understand and hold in conversation multiple points of view. The ability to think historically, and to understand the progress of ideas and events as they have occurred, and to recognize that our own moment is not inevitable, and that its impact on the future is hard to understand.”

“And the other thing we need is access to artifacts of the past. And to preserve what we have of our shared cultural heritage.”

On Interpretation, Paraphrase, and Translation

One of the richest parts of our conversation centered on language, reading, and interpretation. Richa emphasized:

“Paraphrase and close reading are exactly what I teach students to do. It’s the biggest part of the intellectual work that I’m guiding students to develop, which is… once you’ve found [a source], reading it and understanding it. This cannot be lost. We cannot let the machine become the interpreter.”

“If we no longer perform our own act of summary and paraphrase, it means that we no longer are able to engage in close reading of a source sufficient to develop our own comprehension of it. And that is at the root of humanistic ways of thinking.”

“You can’t have true discourse with AI. The interpreter is still a person, a human in the community… even if there’s an interpreter, it should still be possible for others in the community to themselves engage in their own act of interpretation, right? Which may challenge what the interpreter has delivered.”

“If I were to publish material from another language, and I wanted it translated into English, I would never, ever, ever, ever publish an AI translation. I would always hire a human translator. So, it’s one of those things that is useful, but the human act of translation is irreplaceable.”

Asked for a practical takeaway, Richa’s advice was clear:

“Read a book. On paper. Just sit down and read the newspaper, or read a book on paper, and just think about it a bit. Read the next one. Yeah, just, just, time and concentration with text, so that you can flex your capacity for paying attention. And for reflection.”

As the conversation turned more personal, both of us reflected on moments of identity change. Richa offered this poignant insight:

“Life is not linear. We are living in a state of chaos. As Joan Didion says, we tell ourselves stories in order to live, so all of the stories that we tell ourselves are just attempts to impose a kind of rational narrative sequence on what is otherwise just one damn thing after another. And there’s a kind of peace that comes from getting rid of that narrative and just recognizing that it’s just one thing after another, and all I can do to center myself is just, like, just try to be good, and be good to other people, and just, like, ride it.”


And perhaps that is the heart of the matter. Technologies will evolve. But the work of recognizing, interpreting, and relating remains human.

About this series: “Rescuing Our Humanity” is a dialogue series hosted through the Douglas Educational Support Community (DESC), exploring what it means to be human in an age of rapid technological transformation. Each conversation invites faculty, staff, and students to reflect on what we must protect, preserve, or restore of our shared humanity.

Attending to What Matters: A Dialogue on AI, Empathy, and the Costs of Convenience

An interview with Dr. Jennifer Jill Fellows, Philosophy & Humanities instructor


As artificial intelligence continues to expand into classrooms, workplaces, and even our private lives, Dr. Fellows reminds us that technological convenience always comes with a cost.

A philosophy instructor and Associate of Arts Coordinator at Douglas College, Dr. Fellows is currently researching our growing relationships with AI companions — from “grief bots” modeled after loved ones to conversational agents like ChatGPT and Replika. Her upcoming course, Ethics and Technology (Winter 2026), will explore these themes in depth.

A central point in the dialogue is that even when chatbots aren’t designed to be companions, many people still form emotional attachments to them. We anthropomorphize — we imagine what it’s like to ‘be’ them — and that’s both a human superpower and a vulnerability.

In our conversation, Dr. Fellows reflected on what happens when we begin offloading moral and cognitive skills to machines. While using AI for efficiency may feel harmless, she cautions that it mirrors sending a robot to work out for us: we don’t actually strengthen our own intellectual or ethical “muscles.”

She draws from philosopher Shannon Vallor’s concept of “moral deskilling,” suggesting that empathy, care, and attentiveness are not fixed traits but skills that must be practiced. When digital companions respond without real needs or suffering, we lose opportunities to cultivate genuine empathy — even as loneliness rises.

Dr. Fellows also highlights the environmental and human costs of generative AI: massive energy use and the emotional toll borne by underpaid content moderators who make these systems safe for public use. “All convenience comes with a cost. You give something up to gain something,” she says.

For educators, her advice is both practical and profound: “And so I would ask people to be very mindful about what the costs are to themselves, what the costs are to their communities, what the costs are to the planet at large, and to really think about whether and how they’re going to use these tools. And for educators specifically, my other piece of advice would be to really think about what we want students to gain from these classrooms.”

Perhaps the deepest lesson came near the end of our exchange. Quoting the French philosopher, mystic, and political activist Simone Weil, Fellows emphasized that true attentiveness means holding one’s desires in check to receive from others and from the world. In an attention economy designed to feed our wants, that kind of listening may be the most radical skill of all.

A few links to topics mentioned in the recording
Thomas Nagel, “What is it like to be a bat?”
Joseph Weizenbaum and the ELIZA Chatbot
Authenticity in the age of digital companions by Sherry Turkle
Long may you run by Neil Young
MIT Research on impact of AI use on the brain
Simone Weil – Attention as a Moral and Psychological Act
Cyborg Goddess podcast episode with Dr. Nicole Ramsoomair: The costs of convenience

Here are three lists addressing categories of concern with generative AI and technology, based on conversations with educators. Nothing definitive; just playing with the ideas.

Human skills and tasks we feel comfortable offloading to technology

  • Data storage, retrieval, and archiving
  • Complex calculations, analytics, and pattern detection
  • Information searching, sorting, and summarizing
  • Scheduling, reminders, and basic administrative workflows
  • Navigation, mapping, and real-time traffic updates
  • Translation for basic comprehension (not nuance or cultural depth)
  • Routine manufacturing, repetitive assembly, and hazardous tasks
  • Predictive maintenance and environmental monitoring in buildings or infrastructure

Human skills and activities we feel unsure about offloading to technology

  • Decision-making in morally ambiguous or high-stakes contexts
  • Education design and assessment that require understanding of learner needs
  • Creative work where originality, cultural resonance, or personal meaning matter
  • Complex negotiations and conflict resolution
  • Emotional support, counselling, and mentorship
  • Interpretation of art, literature, and cultural heritage
  • Environmental stewardship decisions with long-term community impact
  • Cross-cultural communication where subtle context matters

Human attributes that we need to protect, rescue, and restore

  • Empathy, compassion, and ethical discernment
  • Deep listening and presence in interpersonal interactions
  • Embodied awareness and sensory connection to place
  • Critical thinking and independent judgment
  • Patience, persistence, and the capacity for sustained attention
  • Civic engagement and shared responsibility in community life
  • Imagination and the ability to envision alternative futures
  • Wisdom drawn from lived experience and intergenerational knowledge

The Psyche and Artificial Intelligence

Psychology faculty members Shahnaz Winer and Joe Thompson joined a conversation about how technology is reshaping minds, methods, and what we call the “psyche.”

Dr. Winer, a professor of psychology in her second year at Douglas College, described her focus on the brain–mind question and the nature of consciousness—work now inseparable from advances in artificial intelligence. She recently supervised an honours thesis exploring how tools like ChatGPT might guide a cognitive-behavioural therapy exercise to help with anxious thinking. Shahnaz described the brain as a beautiful bridge between our humanity/physiology and our ‘divinity’, the abstract self we call psyche or soul, noting how AI’s rise makes these questions newly urgent.

Dr. Thompson, Chair of Psychology and Social Sciences, studies how naturally occurring data—for example, from people who play video games—can power new research. He also zoomed out to a big-picture concern: humans routinely overestimate attention and multitasking ability. Whether walking or driving, divided attention can turn risky. “The cognitive system is a very limited attentional resource, and people tend to overestimate how many resources they have.”

The conversation traced psychology’s roots—psyche as “breath” or “soul”—to today’s scientific lens on the brain, touching on the long-running debate about whether humans are fundamentally special or simply part of nature’s continuum. Both guests urged caution with fast-moving AI systems whose inner workings can be opaque. Because generative models are trained on human data, they can inherit our biases—making thoughtful guardrails and regulation part of the work ahead.

Dr. Winer emphasized what to protect as more tasks are offloaded to machines: empathy, connection, and community. “Our relationships and care for one another can’t be replaced,” she noted, adding that AI is still “in its infancy” and can be shaped to benefit humanity. Thompson closed by returning to an ancient reminder from Delphi—Know Thyself—arguing that we should use AI carefully until we better understand our own cognitive limits.

“What Makes Us Human?”: A Political Theorist Reflects on Freedom, Technology, and Awareness

In a recent dialogue hosted at Douglas College, I had the pleasure of sitting down with Dr. Jovian Radheshwar, a political science instructor and political theorist, to explore what it means to be human in an age shaped—and increasingly reshaped—by artificial intelligence.

Jovian opened our conversation with a stark observation: we are witnessing a global slide away from freedom, as technology merges with authoritarian politics to normalize social control. Drawing from Greek philosophy, he highlighted how the ancient word idiōtēs referred not to someone lacking intelligence, but to one who is absorbed solely in private concerns, disconnected from the polis—the public world of shared concern. In contrast, true intelligence (physis in Greek) was linked to being deeply enmeshed in one’s surroundings, aware and responsive to the world.

Jovian offered a compelling critique of today’s techno-culture—especially the AI-driven promise of transcendence from human limitations. Figures like Peter Thiel and movements like Transhumanism, he argues, are rooted not in empathy or collective uplift but in a desire to dominate, to become “supermen” at the cost of humility, biodiversity, and social connection.

We reflected together on what awareness looks like in practice. For me, riding a bike through the city has become a daily exercise in sensing others, slowing down, and learning to pick up on the emotional “weather” of a space. Jovian connected this to urban survival skills—what he called a kind of “street ninja” intelligence—and also to the loss of embodiment that comes from excessive digital distraction.

Our conversation turned to education and the risks of reducing learning to a game of performance metrics. Jovian made a passionate case for embracing failure, for cultivating diverse skills, and for protecting the space to explore what we might be good at—beyond what the system rewards. He quoted Heidegger to illustrate how modern technology shifts us from “taking care” of the Earth to “challenging” it—a shift with profound ethical and ecological consequences.

So, what skills should we protect? What human capacities must we nurture?

Jovian suggests starting with awareness, reflection, the capacity to fail, and the courage to imagine freedom—not as domination, but as mutual care.

This is just the beginning. We both agreed: this conversation needs a Part Two.

Inclusive Teaching with Sarah Skinner


We caught up with Sarah Skinner, an Early Childhood Education instructor, about her first year in a full-time faculty role and her evolving approach to inclusive, student-centered teaching. Sarah shares how Universal Design for Learning (UDL) principles, multiple means of expression, and real-world inclusive strategies are shaping her courses—especially in asynchronous online environments.

Sarah reflects on her recent attendance at the Canadian Society for the Study of Education (CSSE) conference and shares a few memorable sessions and takeaways, including using case studies based on refugee families’ lived experiences and incorporating visual note-taking into learning activities. These approaches center student agency and creativity while maintaining clear learning goals.

Key Topics

  • Sarah’s background in occupational therapy and inclusive education
  • Implementing Universal Design for Learning (UDL) in practice
  • Tools like Padlet for engagement and collaboration
  • The value of choice in learning modalities (e.g., essays, podcasts, paintings, carvings)
  • Conference reflections from CSSE and ideas for case-based learning
  • Explorations of visual note-taking in asynchronous online learning
  • The importance of preserving joy, diversity, and human-centered learning in the face of increasing automation and AI

Tools & Teaching Strategies

  • Padlet for visual and collaborative learning
  • Kaltura media tools for multimedia sharing in online courses
  • Visual note-taking as an inclusive and creative learning strategy
  • Gallery walks as assessment and dialogue tools
  • Flexible assessments (papers, songs, carvings, videos) that align with consistent learning outcomes

Takeaway Quotes

“Inclusive education is education.”
“UDL is about fixed goals and flexible means.”
“We’re not just looking for a well-written essay—we’re looking for understanding.”
“When students are engaged and joyful, they’re more likely to think for themselves.”


🔗 Related Links

Access Sarah’s presentation via the link below:

Padlet-for-Choice-and-Flexibility-1


Sarah welcomes collaboration and conversation with other Douglas College instructors who are interested in inclusive teaching, creative assessments, or translating hands-on activities to online environments.