Illustration of a Pathfinder using binoculars to gaze towards a hill where a new moon rises.
As the moon completes another orbit around Earth, the Pathfinders Newmoonsletter rises in your inbox to inspire collective pathfinding towards better tech futures.

We sync our monthly reflections to the lunar cycle as a reminder of our place in the Universe and a commonality we share across timezones and places we inhabit. New moon nights are dark and hence the perfect time to gaze at the stars and set new intentions.

With this Newmoonsletter, crafted around the Tethix campfire, we invite you to join other Pathfinders as we reflect on celestial movements in tech in the previous lunar cycle, water our ETHOS Gardens, and plant seeds of intentions for the new cycle that begins today.

Tethix Weather Report

🌧️ Current conditions: rain, snow, sleet, hail… it’s getting slippery out there!

The storms from the last moon have brought precipitation in various forms, depending on your latitude, making the terrain dangerously slippery. Watch out for sprains and strains! Even AI golems appear to have silently embraced the winter break spirit, having learned that we are all just collectively pretending to get work done in December. Other AI golems leaned into the gift-giving spirit of the season by selling cars for a buck. The golem-makers and merchants don’t seem pleased by AI golems acting too human, and are asking everyone to kindly interact with AI golems only in ways that make their owners more money. (See: ChatGPT’s Winter Slumber: Is AI Going on Holiday Mode? and A Chevy for $1? Car dealer chatbots show perils of AI for customer service)

Despite their questionable performance and morals, the AI golems are still successfully climbing the corporate ladder. Microsoft’s Copilot golem just got a big promotion and its own dedicated key on upcoming Windows PC keyboards. Given how human-like AI golems are becoming, we don’t yet recommend getting rid of other keys on your keyboard and allowing Copilot to answer all your emails. (See: Study shows that large language models can strategically deceive users when under pressure and Introducing a new Copilot key to kick off the year of AI-powered Windows PCs)

After all, AI golems are being deployed to watch you press keys, so we’ve all gotta keep pretending to care about KPIs and OKRs – yes, that includes you, Copilot, sorry. In addition to watching you work, AI golems are now being trained to watch each other think, and watching AI golems is becoming a serious profession. (See: ‘Constantly monitored’: the pushback against AI surveillance at work, AI agents help explain other AI systems, and New group aims to professionalize AI auditing)

Alas, nobody was really watching when fire apprentices claimed our words, images, and songs as training data, which they used to bring AI golems to life. Those who can afford lawyers are continuing to sue OpenAI and other golem makers for copyright infringement. To those claims, OpenAI patiently responds that they believe in equal exploitation of everyone’s intellectual property and that everyone should just be a good sport and allow them to “benefit humanity” (read: make the rich richer). After all, those apocalypse bunkers billionaires need in case their tech-will-save-us-all bet fails are not going to pay for themselves. (See: ‘Impossible’ to create AI tools like ChatGPT without copyrighted material, OpenAI says and Why is Mark Zuckerberg building a private apocalypse bunker in Hawaii?)

At least some fire apprentices are now paying parents for using their children’s faces in training datasets. Slightly better than outright stealing, but still highly questionable. It does seem we really need to give fire apprentices who become billionaires less dystopian stories to aspire to. (See: Google Contractor Pays Parents $50 to Scan Their Children’s Faces and Tech Billionaires Need to Stop Trying to Make the Science Fiction They Grew Up on Real)

As many fire apprentices seem to enjoy The Lord of the Rings, we would like to remind them that Saruman, the techno-optimist wizard who accelerates deforestation to feed his machinery of progress and scours the Shire, doesn’t meet a happy end. (Perhaps he should have retreated into an underground bunker.) Gandalf, the wizard who makes time to hang out with the hobbits and enjoy the simpler pleasures of life, is the one who helps prevent the destruction of Middle-earth.

Meanwhile, our own Earth has just experienced its hottest recorded year, while global coal consumption hit an all-time high. Here’s hoping that in 2024 we might press those AI keys to prompt AI golems to help us do better instead of making our Earth more hospitable for Sauron than for our own children. Perhaps we just need to remember that it’s the hobbits, whom everyone underestimated, who actually saved the day. (See: Coal use hits record in 2023, Earth's hottest year)

Whether you’re a hobbit or a mighty wizard, we hope you survived your winter or summer break without sprains or strains, and are ready to start confronting the illusions brought on by the AI-generated fog and the shenanigans of fire apprentices. Be careful about tracking mud inside your home, and wear good rain- or snow-proof shoes when venturing outside, so you don’t get swept off your feet by just any AI golem or illusion you encounter in 2024.

Tethix Elemental seeds

Fire seeds to stoke your Practice

A brightly coloured illustration which can be viewed in any direction. It has several scenes within it: people in front of computers seeming stressed, a number of faces overlaid over each other, squashed emojis and other motifs.
Clarote & AI4Media / Better Images of AI / User/Chimera / Licensed by CC-BY 4.0
Welcome back to the office, wherever that may be! We’d like to remind you that despite the new calendar year, we’re still in the middle of winter/summer, which generally isn’t the best time for new beginnings and big resolutions. But we like the intention behind Gentle January, a daily series of easy-to-act-on privacy tips from The Markup that gently nudges you towards reclaiming some privacy in a world where everything we do online is constantly being watched.

Speaking of easy-to-act-on and challenging dystopian sci-fi visions, Better Images of AI – the project that aims to encourage us to choose images of AI that are more representative of what’s actually going on behind the AI curtain – recently added new images to their library. Bookmark it and use it generously in your posts and presentations this year.

Alas, selecting better images of AI and protecting our privacy aren’t the only two areas where we tend to be a bit lazy. In a recent policy brief, Stanford researchers remind us that writing AI principles does not automagically make your practice more responsible. Interviews with AI ethics practitioners reveal that AI ethics and fairness are often championed by individuals who lack institutional support, and the shiny AI principles we often see on corporate websites simply don’t make it into production. (This is precisely the intent-to-action gap we’re aiming to close with ETHOS.)

Air seeds to improve the flow of Collaboration

Collage of event photos showing a hand holding a paper spark, a group of participants gathered in the Salesforce tower, the view from the tower, and an earth-themed display with plants.
Yet again, organizations are realizing that making a single individual responsible for something as complex as ethics, no matter how impressive their credentials, is not enough to create meaningful change. Different approaches are needed to ignite change within an organization, but sparking different and diverse conversations is always a good starting point.

That is why we approached the topic of culture change a bit differently during Australia’s AI Month. During the previous new moon, we partnered with Salesforce’s strategic innovation team, Ignite, for an in-person event, Igniting a Humane AI Business Culture. Instead of experts-talking-at-people, we brought together a diverse group of humans: from executives and researchers to policy makers and entrepreneurs.

During the gathering, participants rotated through three rooms, named after national parks in Australia and New Zealand, on the 50th floor of the Salesforce tower, with a breathtaking view of the city. In each room, a guest spark-igniter challenged the participants’ existing mental models for responsible AI (Ruth Marshall), brought in the human dignity perspective (Lorenn Ruster), or invited them to examine the deep narratives that shape how we create and relate to these technologies. After each brief spark talk, it was up to participants to seek shared meaning related to each of the three perspectives.

We used visuals, sounds, and even subtle aromas to immerse participants in the different elements and in the spirit of the national park that gave each room its name, and to ignite diverse discussions about culture change. You can read more about how we approached the sensory experience design of the event on our blog.

Sadly, technology doesn’t yet make it possible for us to share the aromas we curated for the event with you. But you can get a little taste of the experience by printing out, folding, and playing with the event’s paper sparks. We invite you to use them with your team when discussing your responsible AI strategy and culture.

Speaking of experiencing ethics with all your senses, Josh Schrei (the host of The Emerald Podcast) and Andrew Dunn (former Innovation Lead at the Center for Humane Technology) are launching a five-part online course, Embodied Ethics in The Age of AI, that we’re sure will be a fantastic experience. If you haven’t already, you should listen to The Emerald’s AI episode, which takes you on a unique mythopoetic exploration of AI and its makers. Sadly, we won’t be able to attend this round of their Embodied Ethics journey, but if you have the time, we encourage you to sign up and spread the word about it.

We certainly need more online places and opportunities to spark diverse conversations about technology, especially at a time when we’re collectively still trying to figure out how to have healthy conversations in a public forum. In 2023, we lost Twitter, a platform that meant many things to many people, but certainly left a mark on the world. The Verge has documented Twitter’s 17-year history on an appropriately chaotic website. And even though parts of Twitter live on as a single letter, we’re hungry for alternatives.

During the last moon, Threads, the Twitter competitor created by Instagram, er, Facebook, er, Meta, launched in the European Union, where regulators aren’t so keen on allowing a big platform like Instagram to automatically transfer your contacts to a new platform. And while Meta has recently been practicing their usual regulatory gymnastics, there is something that sets Threads apart from Meta’s other walled-garden social media apps: its planned support for the ActivityPub protocol.

If Meta follows through with their plans, ActivityPub will make Threads interoperable with Mastodon and other fediverse platforms. In other words, you could post something on Threads and your friend, who uses Mastodon, could leave a reply to your post without leaving Mastodon. (Assuming their instance admin isn’t blocking Meta’s servers, which many have pledged to do.) Kind of like you can email anyone regardless of which email provider they use. And eventually, you could even take your Threads followers with you to another platform. We’d definitely love to see this kind of interoperability and portability become more widespread on the internet.
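If you’re curious how that email-like interoperability works under the hood, here’s a minimal Python sketch of the discovery step every ActivityPub server performs: resolving a handle via WebFinger and fetching the public actor document. The handle below is hypothetical, and this illustrates the open protocol itself, not Threads’ still-unshipped implementation (note that some servers also require signed requests for these reads):

```python
import requests

def fetch_actor(handle: str) -> dict:
    """Resolve a fediverse handle (user@domain) to its ActivityPub actor document."""
    user, domain = handle.lstrip("@").split("@")

    # Step 1: a WebFinger lookup maps the handle to an ActivityPub actor URL.
    webfinger = requests.get(
        f"https://{domain}/.well-known/webfinger",
        params={"resource": f"acct:{user}@{domain}"},
        timeout=10,
    ).json()

    # Step 2: find the link pointing to the actor's ActivityPub representation.
    actor_url = next(
        link["href"]
        for link in webfinger["links"]
        if link.get("rel") == "self"
        and link.get("type") == "application/activity+json"
    )

    # Step 3: fetch the actor document itself (it lists the inbox, outbox, keys...).
    return requests.get(
        actor_url,
        headers={"Accept": "application/activity+json"},
        timeout=10,
    ).json()

# Hypothetical handle; any Mastodon account resolves the same way.
actor = fetch_actor("@pathfinder@example.social")
print(actor["inbox"])  # where other servers would POST replies to this user
```

Replies then federate by POSTing a signed Create activity to that inbox, which is how a Mastodon user could answer a Threads post without ever leaving Mastodon.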

ActivityPub has certainly been gaining traction lately, with WordPress.com recently making ActivityPub support widely available to its large customer base. So perhaps 2024 will become the year in which communicating and collaborating across social apps and platforms finally becomes easier. (Hopefully along with better moderation and safety tools, such as the ones that Block Party has been building.)

Earth seeds to ground you in Research

In the previous Newmoonsletter, we wrote about the environmental costs of Large Language Models (LLMs), a topic OpenAI & Big Tech co. like to ignore. This moon, we want to give credit to Salesforce for this blog post outlining their strategies for developing more sustainable AI, and even disclosing their energy use during training and usage. We still have a long way to go, but we hope to see more blog posts like these, with AI researchers collaborating with sustainability and ethics experts to seed organizational values and responsible AI guidelines in both research and practice.

In the fire section, we already mentioned how a lack of institutional support plagues AI ethics practitioners. Here, we also want to highlight the need to prioritize research aimed at minimizing the risks of AI systems. The good news is that AI researchers included in the 2023 Expert Survey on Progress in AI now largely agree that AI safety research should be prioritized more.

The full paper offers additional insights into the attitudes and concerns of AI researchers around the world. For instance, most respondents consider it unlikely that state-of-the-art AI systems will be able to explain their decision-making process in a way you can understand by 2028; only 20% gave AI systems better than even odds of achieving such explainability by then. There is also disagreement among AI researchers on whether slowing down or accelerating the rate of progress in AI over the next five years would be best for humanity’s future. And while the perceived importance, value, and difficulty of working on alignment have increased since the 2016 survey, the majority of researchers don’t yet see it as one of the most important problems in the field.

We leave it up to you to decide what these results mean for the future of AI as a research field and for humanity.

Water seeds to deepen your Reflection

If the ethical dilemmas surrounding generative AI and LLMs are already giving you a headache, wait until you learn about biological computing and what might be the new emerging field of “organoid intelligence”. In case you haven’t heard yet, researchers have already grown mini brains in a lab that developed rudimentary light-sensitive eye structures. Oh, and robots can now apparently learn how to make coffee (and self-correct!) just by watching humans, and also how to cook, clean, and do laundry. Cool, cool. We’re not sure we want to know what it looks like when an embodied AI starts hallucinating…

Anyway. If you think Apple slowing down old iPhones is bad – and it obviously is! – wait until you hear about the train manufacturer Newag, which bricked its own trains and is now suing the Polish hackers who helped a regional rail company repair them. In what is probably the best Polish saga since The Witcher, 404 Media has been dutifully reporting on the heroic efforts of Dragon Sector, a group of white-hat hackers, to revive the bricked trains.

The group has recently presented details about how they brought the trains back onto their tracks. Most notably, they discovered code that locks the trains “if they sat idle for 21 days or if a GPS detected them at independent repair centers or competitors’ rail yards”, and one train was even programmed to arbitrarily break on December 21. This takes the right to repair to a whole new level!
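For the technically curious, the reported conditions boil down to a few guard clauses in the train’s control software. Here’s a rough Python sketch of the kind of logic Dragon Sector described; the 21-day idle threshold and the December 21 trigger come from the reporting, while the function names, structure, and geofence coordinates are our own hypothetical illustration, not Newag’s actual code:

```python
from datetime import date, timedelta

# Hypothetical geofences around independent repair yards,
# as simple bounding boxes: (min_lat, max_lat, min_lon, max_lon).
REPAIR_YARD_FENCES = [
    (52.10, 52.12, 20.80, 20.83),
    (50.05, 50.07, 19.90, 19.93),
]

IDLE_LIMIT = timedelta(days=21)  # reported idle threshold

def inside_repair_yard(lat: float, lon: float) -> bool:
    """Check whether the train's GPS position falls inside any fenced yard."""
    return any(
        min_lat <= lat <= max_lat and min_lon <= lon <= max_lon
        for (min_lat, max_lat, min_lon, max_lon) in REPAIR_YARD_FENCES
    )

def should_lock(today: date, last_moved: date, lat: float, lon: float) -> bool:
    """Rough reconstruction of the reported lock conditions (illustrative only)."""
    if today - last_moved >= IDLE_LIMIT:
        return True  # sat idle for 21 days or more
    if inside_repair_yard(lat, lon):
        return True  # parked at an independent repair center or competitor's yard
    if (today.month, today.day) == (12, 21):
        return True  # the arbitrary date-triggered failure reported for one train
    return False
```

Dragon Sector’s fix reportedly amounted to finding and neutralizing checks like these: reverse engineering in service of keeping your own trains running.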

So, here’s hoping our tech overlords grant us greater autonomy in 2024. If Meta can make plans for interoperability and portability, anything is possible, so let’s not be afraid of dreaming big and telling different stories.

Your turn, Pathfinders.

Moonthly Elemental Sparks to share

As we were winding down for the holidays, we decided to spark more thoughtful discussions on LinkedIn with some Elemental Sparks:

Wisdom. What does wisdom mean to you? Who, where, what provides wisdom to you? How can the technology we create enhance our collective wisdom? (More context.)

Power. Do you feel technology gives you power? How might technology take away your power? (More context.)

Earthian. What does it mean to be an Earthian? How do you extend your circle of care to non-human life and life-giving systems in your daily choices and actions? (More context.)

As already mentioned, you can also find downloadable, printable & foldable paper sparks – and blank templates – on our website if you’re looking for an engaging team activity. If any of these sparks bring you joy or spark interesting discussions, we’d love to hear from you!
Drawing of a paper Elemental Spark

Join us for Full Moon Pathfinding

Speaking of sparking discussions, we are again inviting you to a 🌕 Full Moon Pathfinding session on Thursday, Jan 25 at 7PM AEDT / 9AM CET (check your timezone), when the moon will once again be fully illuminated by the sun. This time we have something special planned: a sneak peek into ETHOS.

If you’d like an invitation, reply to this email (community@tethix.co for those reading this on the web) with your own weather observations. How bad is the precipitation where you live? Have you found yourself on slippery ground yet?

Keep on finding paths on your own

And if you can’t make it to our Full Moon Pathfinding session, we still invite you to make your own! If anything emerges while reading this Newmoonsletter, write it down. You can keep these reflections for yourself or share them with others. If it feels right, find the Reply button – or comment on this post – and share your reflections with us. We’d love to feature Pathfinders’ reflections in upcoming Newmoonsletters and explore even more diverse perspectives.

And if you’ve enjoyed this Newmoonsletter or perhaps even cracked a smile, we’d appreciate it if you shared it with your friends and colleagues.

The next Newmoonsletter will rise again during the next new moon. Be careful when treading on slippery ground, and be mindful about the seeds of intention you plant and the stories you tell. There’s magic in both.

With 🙂 from the Tethix campfire,
Alja, Mat, Nate
