Pathfinders Podcast explores AI chatbots as mediators that can improve human communication

A new episode of the Pathfinders Podcast is now available wherever you get your podcasts. In the latest episode, we dance with the question: Should we use AI chatbots as mediators in human affairs?

This episode was inspired by our observation that ChatGPT seems to have a stronger moral compass than its makers. When asked about the ethics of questionable business decisions, such as using people’s voices without their consent, ChatGPT weighs considerations from diverse points of view and advocates for upholding ethical standards. This made us wonder: would executives like Sam Altman make more ethical decisions if they used their own creations in day-to-day moral deliberations? And more broadly, could we use Large Language Models (LLMs) such as ChatGPT to help us communicate better, resolve interpersonal conflicts and tensions, and perhaps even make better collective decisions?

We start the conversation by exploring why we, as humans, currently need better mediators to help us practice healthier ways of communicating, especially on social media and in professional settings. Given the polycrisis we’re facing, we would clearly benefit from healthier communication that lets us collectively discover what we value as humanity and how we might ensure a liveable planet for future generations. The story of separation that emphasizes individualism has trapped us in a shortsighted, polarized way of seeing the world that doesn’t leave much room for imagination or prosocial behavior.

And while tech companies like OpenAI also tend to prioritize short-term business gains, their AI creations often seem to offer a more mindful view of the world: they help us discover the average of what people consider good and socially acceptable behavior. Given this breadth of knowledge and diversity of considerations, people tend to perceive LLMs such as ChatGPT as more impartial, despite their obvious biases and limitations. We explore some possible reasons for this perceived impartiality, and how the myth of impartiality might be used both for mediation and for manipulation, as LLMs mirror both human biases and the values and goals of their makers.

In the final part of our conversation, we reflect on the different scales and contexts in which AI chatbots are already mediating our communication and relationships. We discuss use cases in which AI chatbots successfully provide real-time interventions that improve communication: by consulting a chatbot, anyone can check in real time whether they’re being an a**hole, pause to reflect on how they might be perceived, and rephrase their message to better match their communication intent. We also explore how AI chatbots could help reduce information and power asymmetries and support more equal participation in group settings.
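To make the idea concrete: here is a minimal sketch of what such a “tone check” could look like, using the OpenAI Python SDK. The model name, prompt wording, and the tone_check helper are our own illustrative assumptions, not tools discussed in the episode.

```python
# A minimal sketch of a "tone check" mediator: send a draft message to an
# LLM and ask how it might land on the recipient, plus a rephrasing that
# keeps the sender's intent. Assumes the OpenAI Python SDK (pip install
# openai) and an OPENAI_API_KEY in the environment; the model name and
# prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()

def tone_check(draft: str) -> str:
    """Ask the model how `draft` might be perceived and for a kinder rephrasing."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would work here
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a communication mediator. Given a draft message, "
                    "briefly describe how its tone might be perceived by the "
                    "recipient, then suggest a rephrasing that preserves the "
                    "sender's intent while reducing hostility."
                ),
            },
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(tone_check("Why is this STILL broken? Did anyone even test it?"))
```

The design choice worth noting is that the chatbot is inserted before the message is sent, as a reflective pause, rather than as an after-the-fact moderator.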

Given that LLMs can translate quickly between different mental models and ways of thinking, while also holding a general, statistically averaged grasp of the ethical standards represented in their training datasets, they might indeed be useful as mediators between humans. Perhaps we can even put the bias of machine impartiality to good use, for instance by rebuilding communication bridges and trust between people.

You can find the full episode notes on Substack.

About the podcast

Pathfinders is a podcast for wondering wanderers, eager to explore paths to better tech futures, together. Each episode is a meandering exploration inspired by the seeds planted in our Pathfinders Newmoonsletter at the beginning of the lunation cycle, and the paths illuminated during our Full Moon Gathering.

Ready to become a responsible firekeeper?

Don't let the house, city or forest burn down. Cultivate a sustainable flame by working with our team. We deliver workshops, keynotes and advisory services to companies that want to lead the way to a responsible tech future.