AI Charter
Introduction
These core principles aim to guide the ethical development, deployment, and use of Artificial Intelligence within Datch, and to provide thought leadership on these issues beyond it. The principles are deliberately structured in a "this enables that" manner, following the pattern set by the Agile Manifesto. This phrasing emphasises that the "this" principle should take precedence because it enables or enhances the "that" principle. This ordering should guide development teams within Datch and help explain our philosophy to stakeholders outside of Datch.
Datch’s AI principles are:
Accessibility enables Inclusivity: At Datch, our objective is to create AI tools that can benefit all members of society. This aligns directly with our mission of empowering frontline workers with AI tools designed to augment their productivity. For example, when developing our speech recognition system, we build it to accurately transcribe speech from a variety of English accents, as well as from speakers for whom English is a second language. As a result, the benefits of accessible AI extend to underrepresented populations, making the technology genuinely inclusive.
Provenance enables Explainability: Provenance (also known as data lineage) is an important principle at Datch. We want to maintain a clear and traceable chain of sources for all data and suggestions. This practice ensures transparency, assists with data governance, and fosters trust in our AI systems. For instance, we aim to reference the original source of information whenever a query is answered using generative AI. Provenance is critical for explainability, which is the ability to make the AI's reasoning understandable to its users. Explainability can only be as good as the quality and reliability of the underlying data lineage, which is why we emphasise the provenance principle.
Alignment enables Safety: Alignment, in our context, means designing AI systems to benefit humanity, avoid unintended side effects, and respect human autonomy. Our goal is to develop AI that can automate routine tasks, thereby freeing humans for more meaningful work, all while preserving human dignity. Central to this philosophy is the belief in obtaining explicit, informed consent before any AI system acts on a user's behalf. Equally, the expert user must be able to override the system's safety concerns. For example, in an emergency, a well-aligned AI should allow for expert intervention, respecting the user's practical needs and expert judgment even when these actions appear to contradict the system's safety guidelines. Only an AI system with such a nuanced understanding can be considered truly safe in an industrial environment, where ultimate control and accountability for the tool's use must rest firmly with the user.
Transparency enables Privacy: It is essential for an AI company to be transparent about how users' data will or will not be used, and how that data will be kept safe; clear communication on this matter fosters trust and engagement with the system. Preserving user privacy and adhering to local regulations are non-negotiable. At the same time, we must be able to measure and improve our AI's accuracy and reduce bias. The effectiveness of AI is directly tied to the quality of its training data, so we carefully use some user data to improve the overall quality of the system. We aim to be clear and transparent in our communications, explaining how, where, and why we use user data and the benefits this provides to our users. By responsibly utilising user data to measure accuracy, we enable a genuinely private solution, informed by a clear understanding of its trade-offs and efficacy.