Fair and explainable AI
Read our blog post "Dealing with bias in AI systems" for a more detailed exposition of bias in AI and of how Starmind deals with it.
Starmind helps people work together and exchange knowledge more efficiently. Starmind has an impact not just on how individual users work, but also on entire organizations and, by extension, on society as a whole. We consider it our ethical responsibility to ensure that this impact is positive and inclusive. We develop Starmind's AI to empower and enhance human intelligence, without prejudice or discrimination.
Bias is defined by Wikipedia as "a disproportionate weight in favor of or against an idea or thing, usually in a way that is closed-minded, prejudicial or unfair". In recent years, concern about various forms of bias in AI applications has grown, putting the topic higher on the agenda of vendors, customers and regulators. Our data scientists actively follow and apply the newest methods for discovering and mitigating different types of bias in AI models, to ensure that Starmind treats its users and their data fairly. This is not an exact science: concepts such as bias, fairness and justice depend on historical and cultural context. To nevertheless address bias systematically, Starmind's AI research is guided by six core principles: justification, explanation, anticipation, reflexiveness, responsiveness and auditability.
Justification
The benefits of using Starmind outweigh any risks associated with the use of AI.
The benefits of Starmind are measurable: metrics can show how much time users save by quickly getting the right answers to their questions. The use of AI is essential to achieving this productivity gain: manually creating and maintaining an equivalent database of "who knows what" within an organization would require a prohibitive amount of resources. Starmind is not intended for higher-risk use cases, such as performance assessments.
Explanation
Starmind's AI is able to give reasonable and useful explanations for its decisions.
The main decisions made by Starmind's AI are focused on associating users with their topics of expertise. These topics are represented by human-readable labels, so users can easily understand which topics they are associated with in Starmind. For each associated topic, users also see an explanation of the connection, based on the user's past interactions with that topic (see also: the profile of a user).
Anticipation
Starmind provides appropriate channels for users and customers to report and/or correct anything they perceive as incorrect or biased.
Starmind users can provide feedback from within the application, which is processed by the Customer Success Team and, if necessary, forwarded to Starmind's in-house Data Protection Officer. Furthermore, users can actively suppress their association with certain topics, or add additional topics as aspirations. Users who were selected as experts can decline the request, and users who were not selected can still answer questions. Both actions help to tune Starmind's AI for future decisions.
Reflexiveness
Starmind collects and processes data in a transparent manner, and Starmind’s AI is able to cope with changes, limitations, and inaccuracies in the data.
Starmind can process user data from within its own application, as well as from any additional learning sources chosen by the customer. All data is processed in GDPR-compliant ways. In particular, Starmind never processes data that its users would consider "private", such as private emails.
Changes such as new company-internal wording are picked up by Starmind, and thanks to its balanced learning and forgetting algorithms, newcomer experts can be recognized alongside long-standing ones. If a user changes focus, for example because of a new position, their associated topics are adapted accordingly.
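To illustrate the general idea behind balanced learning and forgetting, here is a minimal sketch of time-decayed expertise scoring. This is an illustrative example only, not Starmind's actual algorithm; the half-life value and function names are assumptions chosen for the sketch.

```python
from datetime import datetime, timedelta

# Hypothetical half-life for illustration; a real system would tune this.
HALF_LIFE_DAYS = 180.0

def expertise_score(interaction_dates, now=None):
    """Sum of exponentially decayed weights, one per past interaction
    with a topic. Recent activity counts more, so a newcomer's recent
    contributions can outweigh a veteran's long-faded ones."""
    now = now or datetime.utcnow()
    score = 0.0
    for d in interaction_dates:
        age_days = (now - d).total_seconds() / 86400.0
        # Each interaction's weight halves every HALF_LIFE_DAYS.
        score += 0.5 ** (age_days / HALF_LIFE_DAYS)
    return score
```

Under this kind of decay, a user with a few recent interactions on a topic can rank above a user whose many interactions all lie years in the past, which is the "forgetting" behavior the paragraph above describes.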
Responsiveness
Starmind's AI is able to adapt to external factors, such as changes in society.
In recent years, Starmind's customers have faced dramatic changes in the way people in their organizations collaborate, in particular the sudden shift to working from home during the pandemic. Starmind's AI was not only able to automatically identify experts for new topics that suddenly became important; the overall relevance of Starmind in these organizations also grew, due to the increased need for knowledge exchange between people working in different locations.
Auditability
Customers are able to verify that Starmind works as expected, and that Starmind's AI only uses appropriate data sources.
Starmind's AI is well documented on these pages. The data that Starmind uses to learn expertise comes directly from the customer and their end users, and can therefore be audited at any time. The language models in Starmind use well-established methods and datasets that are audited for bias by independent research institutions.
These six guiding principles are based on academic research by Krishnakumar and Stilgoe et al.