The AI Accountability Lab (AIAL) is a research group launched out of Trinity College Dublin’s ADAPT Research Ireland Centre in November 2024. Founded and led by Professor Abeba Birhane, the group is committed to protecting society, particularly its most marginalized members, from potential psychological harm caused by AI technologies. It works to detect and investigate destructive features of current AI system design, aiming to foster informed policies and hold responsible parties accountable for the misuse of AI.
In June 2025, AIAL was awarded €199,978 by the European AI & Society Fund to develop a justice-oriented audit framework, becoming one of fifteen new grantees chosen from a pool of 325 applicants. The funding, provided under the “AI Accountability Grants” call, forms part of a €4 million “Making Regulations Work” programme that supports organizations working on AI accountability in Europe and advancing social justice objectives in the implementation of the European AI Act.
More recently, the research group secured funding from the AI Security Institute (AISI) within the UK government’s Department for Science, Innovation and Technology to explore the detrimental impacts of AI companions on mental health. The project investigates harmful features of user interface design, the development of emotional dependency during conversations with chatbots, and data collection and privacy practices. Its purpose is to provide policymakers with comprehensive evidence of the harms posed by AI, supporting the development of frameworks that serve the public interest.
Nana Nwachukwu is a PhD student working at AIAL. Her current research focuses on algorithmic governance, justice-oriented frameworks for evaluating socio-technical systems and AI ecosystem audits. “One of the challenges in designing a justice-oriented framework is defining what constitutes justice”, she shared, “because it is highly subjective. More often than not, we have left justice to be defined by the people who create the system, and that’s not okay. I believe that we should lean towards asking civil society – the people, the community, the users – to define what is safe and just”.
During her research on media capture, Nwachukwu encountered dozens of cases in which Grok, an AI chatbot developed by Elon Musk’s xAI, allowed users to commit a serious violation of privacy: generating images of real individuals in nearly nude contexts. “This was horrible”, she shared, “so I started asking myself: what can I do to raise awareness of an issue I notice and others do not? What is within my reach?” This prompted Nwachukwu to take action by creating and publishing a dataset containing more than five hundred nudification instances she had documented. Her work catalyzed a chain of reactions: multiple countries, including the UK, Malaysia, Indonesia and Australia, acknowledged the need for regulatory intervention. “I didn’t expect it to catch on fire”, she admitted, “but I am glad to see regulatory responses; they are very, very welcome”.
In her article for the Guardian, Nwachukwu presented her findings and raised concerns over the flawed design of current AI safety measures. “There is a huge gap in the law intended to regulate generative AI, even in the EU”, she explained later. “The Digital Services Act is supposed to police this type of harm, but it falls short of properly capturing the damage experienced by victims.” During our interview, Nwachukwu underscored the limited capacity of existing policies to enforce accountability. Specifically, she remarked on the non-mandatory nature of the General-Purpose AI (GPAI) Code of Practice implemented in the EU. “Accountability rests on the ability to impose consequences if someone does not follow a certain agreed procedure”, she declared.
In relation to the development of new laws and the introduction of amendments, Nwachukwu stressed the significance of evaluating existing policies. “We need to conduct an analysis to see how well the legal frameworks work”, she explained. “How well have they performed in regulating platform providers? How effective were the penalties? How large were the fines compared to institutional capacity? Were the victims compensated? How quickly were the payments made? How often have offenders repeated violations after being fined?” She argued that only after answering these questions can we objectively evaluate whether the current law provides meaningful protection.
Nwachukwu also highlighted the importance of engaging with victims: “What form of justice did they actually receive as a result? Do they believe the fines were sufficient retribution? If their data leaked, for instance, did they feel safer after Google was fined for a privacy breach?” Without this analysis, she warned, we risk assuming that justice is being done, an assumption that ultimately benefits Big Tech.
On the research side, Nwachukwu underscored the necessity of government funding. “Collecting evidence is expensive, so it has to be funded by the state”, she declared. “Public interest researchers are often financially limited to short-term projects of up to five years, but this type of work requires long-term documentation, knowledge management and the identification of victims, incidents, witnesses, and so forth. Moreover”, she continued, “such research also involves extensive curation, time, distance and reach, so it’s difficult for a single entity to conduct. Optimally, this has to be a collaborative effort funded by the government”.
During the interview, Nwachukwu also noted the crucial role of Irish law in shaping global accountability. She remarked that “Ireland hosts the largest number of tech hubs in the EU”, and went on to state that “this means that Ireland can directly administer accountability to these companies through its national laws, even beyond EU-level regulation, because these corporations are physically headquartered here”.
One proposal Nwachukwu advocates for is an authority awareness framework. “This framework would make sure that everyone who’s engaging on a platform is aware of the distribution of power within this digital ecosystem”, she said, explaining that it “would allow users to negotiate their existence and participation in these technologies”. She offered a practical example: terms and conditions. “Generally, users must accept all policies to access digital tools. But why can’t we accept only certain provisions? Why can’t we negotiate how or for how long we engage?” she asked, illustrating what the framework is intended to address.
Nwachukwu holds a firm position regarding minors’ use of AI. “If I were in charge of regulation, I would focus on protecting children”, she declared. “I do not believe kids should encounter AI systems in any form before the age of thirteen, and the interaction should occur only under proper supervision until they are sixteen.” While Nwachukwu is strongly opposed to children’s unregulated exposure to AI, she does not view artificial intelligence as inherently bad. “It depends on how the technology is designed”, she specified, pointing to Swiss AI as an existing model that is “safe, public-focused and built by and for the community”.
Ultimately, Nwachukwu emphasized the importance of public resistance and engagement in challenging harmful AI system designs. “Negotiating our existence on the Internet is something we should do”, she remarked. “If we don’t, then we are conceding authority completely to Big Tech – and this means losing any meaningful ability to regulate it”.