Can we bring ethics into our work with AI?
Jumping straight into prompting means that we’re working within the predefined structures of the platform we’re using with whatever biases they have baked into them. Is there anything we can do to change that?
Building on my last post about working with context when using LLMs, I’ve been running an experiment to see how far we can bake a simple ethical framework into the process to improve both the working relationship and the outcomes we produce together.
In this case, working on a web app, I started thinking not just about accessibility but about how we might build greater inclusivity into what we create: considering internationalisation and challenging the Western biases in the code and tests that LLMs produce. I’m not going to say this is a perfect solution or without flaws, but it does appear to have had a positive impact.
An evolving framework
Starting with a few notes of my own, the framework is an evolving piece: I work collaboratively with the agent to look for improvements as we go, encoding what I feel is good practice. I note to myself that this in turn has the potential to bake in any innate biases I might hold myself.
What emerged was a living document: a Markdown file, a charter, referenced from my core context file. The charter has grown, and loading it all into context can consume a large chunk of tokens, so I ask the LLM to condense what it needs from it and refer back to the full document when necessary. As we worked together, I’d see references to the charter surface in the LLM’s responses. Because the file is written in Markdown, it can easily be formatted in a human-readable form - this is a document that applies to humans and agents alike. It sets expectations for how we all think about these concepts and how we relate to one another.
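As a sketch of what I mean - the file name and headings here are hypothetical, not the actual charter - the shape of such a document might look like:

```markdown
<!-- ETHICS.md — hypothetical excerpt of a charter, for illustration only -->

## Roles and expectations
- Agent and human are collaborators; either may question a decision and cite this charter.

## Inclusion and bias
- Prefer internationalised defaults (names, dates, addresses, currencies) over Western assumptions.
- Review generated code and tests for culturally narrow example data.

## How to use this document
- Condense the points relevant to the current task; refer back to the full file when in doubt.
```

The core context file then only needs a short instruction pointing at the charter, rather than carrying its full text on every task.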
This document gives clarity to the roles agents and people assume, making the working relationship and expectations explicit, then going deeper into what we mean by inclusion and bias in our work. We highlight international differences and cultural norms. In this form, we can improve how the LLM interprets and responds, but also use the charter as a sense check of our own work, asking the LLM to review what we’ve created with the charter in mind, making a comparison and leaving it less open to interpretation.
Not a fix but…progress?
I’m not naive enough to think a single file will instantly free an LLM of the inherent biases of its creators, or that this opens the door to some design system utopia, but it feels like a nudge in a better direction.
I’d love for folks to give this a try in whatever you create. Let me know if you feel it makes any impact, but I’m also very keen to see how it could be strengthened and improved within the scope of what it can influence. Of course, you can alter and interpret all of this to match your application, bringing intentionality to how you approach these topics and encapsulating them in a form like this.
Feel free to use it in your work, or fork and improve upon it - and it’s well worth reading Cennydd Bowles’s book Future Ethics if you’re interested in the topic.