
Introduction
On 22 October 2024, people from across Ireland joined a People’s Inquiry to share testimony on how Big Tech corporations have impacted their lives, families and communities. The Inquiry was organised by Uplift and built on the organisation’s Big Tech survey, in which more than 1,000 people shared heartbreaking stories about Big Tech’s harms.
The Inquiry in Dublin was the first of its kind, connecting the voices of people who have suffered harms at the hands of Big Tech with a panel of experts to send a clear message to the incoming government of Ireland:
We need brave political leadership to stand up to powerful tech companies and build a safer, better internet for us all.
Big Tech giants such as Google, Meta, TikTok and X have unprecedented power, and we increasingly rely on their platforms and services in our daily lives.
But for too long, governments have allowed these corporations to generate enormous profits, without holding them to account for escalating harms to people, the environment and democracy.
The Inquiry heard testimony across four interconnected themes: health and mental health; marginalised communities and online hate; data misuse and climate change; and threats to democracy.
The Expert Panel was composed of Chairperson Noeline Blackwell (Online Safety Coordinator for the Children’s Rights Alliance), alongside Liz Carolan (TheBriefing.ie), Tanya O’Carroll (Independent Tech Policy Expert), Dr Ebun Joseph (Special Rapporteur for the National Action Plan Against Racism) and Ian Power (SpunOut).
“We have known about the problems with social media for over two decades now, but we’re still waiting for political leaders to use their power to make it safe” – Noeline Blackwell
Overview: Big Tech’s Power and its Systemic Impacts

The Inquiry highlighted how Big Tech’s power and business model are at the root of the harms the companies inflict on people’s lives. As Chair Noeline Blackwell stated in her introduction:
“In the urge to make as much money as possible, to build your empire as large as possible, to be top dog in the industry of this century so far… you risk not producing safe products” – Noeline Blackwell
Big Tech’s concentrated power and control over critical digital infrastructure such as operating systems, search engines and social networks enable the companies to relentlessly pursue profit without fear of competitors, regulators or user backlash.
The dangers of allowing unaccountable billionaires to maintain centralised control over core digital services like social media networks have now been laid bare. Elon Musk has turned X into a megaphone for far-right hate and conspiracy theories, while Mark Zuckerberg has drastically rolled back safeguards on Meta’s platforms, allowing more misinformation and more hatred against women, migrants and transgender people.
Big Tech’s profits derive significantly from an exploitative advertising-driven business model that relies on constant surveillance of our digital lives. This business model has repeatedly led the companies to put profits before safety, with algorithms that encourage engagement and addiction.
The environmental impacts of Big Tech go hand in hand with online harms. The exponential demand for energy and water, driven in part by digital advertising and the AI boom, means Big Tech is accelerating the climate crisis, all to enable more ads, addiction and hate.
Ireland’s reliance on the economic benefits of Big Tech’s jobs and investment means the companies’ lobbyists exert massive political pressure on policymakers and regulators, heading off any fundamental challenge to their core business model.
For decades, governments allowed Big Tech to enjoy an era of self-regulation. The companies may finally face accountability through new laws such as the EU’s Digital Services Act and Ireland’s Online Safety Code. Proper enforcement of these laws is critical, but governments need to go further and regulate Big Tech’s power, in the interests of public and democratic safety.
“Are we going to see brave leadership? In Ireland we are a global digital superpower, but we are not taking our responsibility seriously” – Liz Carolan
People derive real value from Big Tech’s platforms and services, relying on them to engage with the digital world, access public services, and stay connected with friends and family. But too often, people feel trapped: switching or leaving these platforms is difficult, cutting them off from personal networks and critical information.
Minoritised communities and vulnerable people are more at risk, particularly on social media. The Inquiry heard how migrants and people seeking asylum, LGBTQI+ people, children and young people, and women and girls are disproportionately affected.
People shouldn’t have to deal with these systemic problems on their own. They shouldn’t be forced to make a false choice between quitting digital services or accepting the harms that go with them. Governments’ role is to protect people from these harms. A safer and rights-respecting digital future is possible.
“We know what the solutions are; what we need now is the political will to enforce laws” – Tanya O’Carroll
Health and Mental Health

The Inquiry heard testimony on how social media platforms are having disastrous effects on some people’s health and mental health, causing anxiety and depression and even leading to suicide. Big Tech companies are knowingly enabling the widespread dissemination of dangerous content that damages people’s health.
“I’m being fed perfect unattainable bodies and thinking ‘why can’t my body look like that?’ It hurts being online — sometimes it just hurts” – April Tambling
Saoirse, whose name has been changed to protect her anonymity, struggled with an eating disorder, but dangerous “thinspo” information shared online made the situation much worse, “helping normalise things that aren’t normal”.
This is not just a failure of the companies’ content moderation policies and enforcement. The platforms are designed to keep us engaged and addicted, to generate more advertising profit for the companies; as a result, more extreme content is promoted and pushed into people’s feeds by the platforms’ recommender systems.
“Unseen algorithmic hands are moulding our hearts, minds and bodies — especially children’s — and this infringes our rights” – Dr Mimi Tatlow-Golden
Marginalised groups such as disabled people and LGBTQI+ people find positive and meaningful communities and connections online. But these groups are more at risk of hate and abuse, and so are often forced off the platforms and excluded from online spaces.
Tate Minish, a college student, testified how social media has enabled him to learn about mental health, find out about his own ADHD diagnosis and connect with the queer community.
But these positives were outweighed by the toll social media took on Tate’s mental health. He finally made the difficult decision to quit social media, and although that “felt like living under a rock”, it removed “a daily stream of negative thinking patterns”.
“It often felt like Big Tech didn’t want me to be a functional person.” – Tate Minish
Molly Hickey, a non-binary youth advocate, testified that as a non-binary person, they receive a lot of death threats and discrimination online because of their gender. But any time they raise this with Big Tech platforms, they are told to delete or take down their own video or post. As a result, Molly said, “I don’t feel I can be included in online spaces.”
There is a growing body of evidence of the harms social media platforms pose to children and young people.
According to recently disclosed evidence from a US lawsuit against TikTok, the company’s own internal research found that it takes just 35 minutes for a TikTok user to form a viewing habit, and the company was aware that compulsive use by young people harms their mental health, yet the measures it introduced were ineffective. TikTok responded that the evidence is “cherry-picked” and “outdated”.
“Unless we make children’s safety a central part of the design consideration of online environments they will continue to face risk and harm.” – Alex Cooney
Alex Cooney, CEO of CyberSafeKids, outlined the severe mental health impacts on children and young people caused by cyberbullying, reduced sleep and exposure to extreme content.
In one instance, a support worker contacted CyberSafeKids about Joanne (name changed), a 13-year-old girl who was being bullied via multiple social media accounts, primarily on TikTok, leading to her self-harming and refusing to leave the house. The organisation reported the incidents to TikTok via an escalation channel, and eventually the offending content and accounts were removed. But without such an intervention, it is hard for people to know what steps to take and to get the companies to act.
Children and young people are particularly susceptible to invasive online advertising. Dr Mimi Tatlow-Golden, Professor of Interdisciplinary Studies of Childhood at the Open University, shared a study of the impact of online food marketing, which found that young people look at and share more ads for unhealthy foods than for the brands and activities they care about most, such as sports stars and influencers.
In all these examples, people are forced to navigate the harms of the online world alone, without Big Tech taking action to make their platforms safer or the government holding the companies to account. The platforms must be made ‘safe by default’.
“It’s similar to climate change, where we spent decades putting the focus on individual actions, such as turning off light switches. We need to move away from individual responsibility — it’s the state’s responsibility, not individuals, to make the online world safer” – Tanya O’Carroll
Marginalised Communities and Online Hate

The design of Big Tech’s social media platforms not only risks encouraging addiction and mental health harms, but also fuels the normalisation and spread of online violence and hatred against groups such as women and girls, migrants and people seeking asylum, and transgender people.
“How many of us need to die for them to do something?” – Jacob Sosinsky
Often these issues are presented as a simplistic trade-off between protecting people online and defending freedom of expression. But while guarding against censorship on social media platforms is vital, failing to deal properly with online hate also crushes the freedom of expression of the people and groups being targeted, as they are forced out of online spaces.
“Minoritised groups are now feeling in danger when they speak publicly online. People are not willing to come forward because they are afraid.” – Dr Ebun Joseph
Ruadhán Ó Críodáin and other trans people find online spaces “utterly hostile, unsafe and deeply toxic”, and it is often simply not feasible for trans people to exist there.
Jacob Sosinsky, a youth worker and activist, testified how, as a trans man, he was forced to delete his social media accounts because of the hate he received “just for existing”, which affected his mental health as well as that of his family and friends.
As a youth worker, Jacob has seen LGBTQI+ people bullied online, trans people being deadnamed, and young people on the verge of suicide. The platforms’ automated content moderation systems are failing to remove harmful content, yet when people report it, the platforms often say there is “no violation”.
Sharon Mpofu, a member of MASI, the Movement of Asylum Seekers in Ireland, reported on the fears of people from migrant communities who are subject to a barrage of hate speech on social media platforms. Young people are being bullied online and at school and spat on in the streets.
Sharon told how, in her own village, a far-right leader came to a local community meeting after false rumours spread that a local hotel was being renovated to house migrants. She was afraid to attend the meeting for fear of being targeted as the only person of colour.
“We are the victims, the fall guys for any of the bad things that are happening” – Sharon Mpofu
Harmful narratives against migrants and people seeking asylum are driven in parallel by the rapid growth of far-right organising and incitement to hatred online, which led to the Dublin riots of 2023 and the violence in Coolock in July 2024. This contributes to a reactive rightward shift in mainstream politics, as social media platforms create the illusion that extremist narratives have more mainstream support than is actually the case.
Edel McGinley, executive director of the Hope & Courage Collective (H&CC), explained how misinformation and rumours about migrants are shared by a small network of far-right extremists and then amplified by the platforms’ algorithms, which serve as “online hate megaphones”. H&CC has documented people’s direct experience of far-right organising in communities across Ireland, showing how digital platforms are key to the far-right playbook and feed the cycle of hate.
“What extreme event has to happen for politicians to take recommender systems seriously?” – Edel McGinley
The Inquiry exposed how women and girls are targeted with hate and violence online. Journalist Alexandra Ryan was a victim of revenge porn after a ‘sex tape’ was filmed without her consent, leading to online harassment and blackmail. She made the difficult decision to tell her story publicly, saying “every day was torture. I would go to sleep crying and wake up crying.”
Alexandra’s case demonstrates how progress can be made when the government is forced to act. In 2021, ‘Coco’s Law’ was passed, criminalising the sharing of intimate images online without consent, and was accompanied by successful public information campaigns. But there remains a gap between the enforcement of criminal law in the real world and online.
“It makes no sense that it’s a crime to tell someone in person ‘I’m going to kill you’, but if you put that online then nothing will happen” – Alexandra Ryan
Climate Harms

The economic drive to serve Big Tech’s ever-growing demands for energy and water means Ireland risks breaking its legally binding climate commitments, and the will of the people embodied in the 2021 Climate Act.
“We’re failing young people twice — we’re not protecting them now and we’re not safeguarding the environment for them for the future” – Ian Power
Jennie C. Stephens, Professor of Climate Justice at Maynooth University, outlined four specific ways Big Tech is fuelling the climate crisis:
- Big Tech’s ballooning energy demand, which now generates more global emissions than aviation.
- Big Tech’s business model, which encourages consumerism through an unsustainable advertising model.
- Big Tech’s algorithms, which spread climate misinformation online.
- Big Tech’s lobbying power, which sets the policy agenda and research priorities in the companies’ interests.
Sorcha Tunney of Friends of the Earth and Eoin Brady of FP Logue Legal testified that energy-thirsty data centres are being rapidly rolled out in Ireland, partly to feed the rising demands of Big Tech’s AI gold rush and its unsustainable large-scale AI models.
The Irish government proclaims Ireland the best place in the world for data centres, and the country is already the third-largest data centre region, hosting 82 data centres. Data centres are predicted to account for 31% of Ireland’s total electricity consumption by 2027.
Yet the government has no national strategy for this roll-out, and there is a lack of transparency. Eoin Brady called it a “Wild West situation”, in which no agency is monitoring the climate emissions of data centres and responsibility falls on local authorities.
There isn’t enough clean energy to meet these data centres’ demands, so they are cancelling out Ireland’s investment in renewables. They also consume enormous quantities of water.
“The massive growth of data centres is like going down an escalator the wrong way. Ireland risks being the data dumping ground of Europe” – Sorcha Tunney
Data centres are beginning to face legal challenges, with the first successful case recently brought against a proposed data centre in Ennis, Co. Clare. But campaigners are fighting a lobbying goliath in Big Tech and the data centre industry.
Ultimately, who is this massive resource consumption for? Will communities see any benefits, or is renewable energy just being diverted straight to Big Tech? Eoin Brady argued that when farmers and others see that one sector is effectively exempted from emissions requirements because of their political influence, “that is a recipe for social strife”.
This raises important questions about Ireland’s long-term economic strategy and national business model, with the country being locked into energy-draining infrastructure for decades to come, all for uncertain Big Tech investment.
Reducing Big Tech’s resource consumption goes hand in hand with tackling the online harms linked to its platforms: measures such as lower-bandwidth video calls and an end to infinite scroll would also cut the energy demands of Big Tech platforms.
Threats to Democracy

Social media networks have become a major source of political information and news. But platforms controlled by Big Tech are not neutral spaces; they shape political debate and the information people see in ways that threaten democracy.
The Inquiry heard how people and communities are silenced on political grounds, how toxicity and abuse online have a chilling effect on politicians, and how opaque targeted political advertising hides who is buying influence. However, there is an alternative: decentralised social networks.
Senator Eileen Flynn, a member of the Traveller community, testified about the lack of diversity in Irish politics and the challenges minority communities face in entering it. The situation is made more difficult by hateful comments and misinformation online, and by the backlash they face when speaking out.
“Tomorrow, I have a speech, but already I’m thinking about the impact it will have in the media and social media because of hate crime…I am afraid, but I’ll still do it” – Senator Eileen Flynn
Janet Horner, a councillor representing Dublin’s North Inner City, received a wave of abuse on social media in the lead-up to the local elections in June, and was physically attacked and threatened in the street. Janet said the tech companies must be regulated so that they can’t “distort discussion and platform hate which shuts down free speech, all the while purporting to be champions thereof”.
Taysir Mathlouthi, EU Advocacy Officer at Palestinian digital rights organisation 7amleh, testified on the systemic silencing of Palestinian and pro-Palestinian voices by social media platforms, both before and since the October 7 attacks. This is happening not only in Israel and the Occupied Palestinian Territory, but internationally too, including in the EU.
The platforms are not moderating content fairly; they are permitting, and profiting from, hateful and extreme content targeting Palestinians. Research by 7amleh found that Meta accepted 20 adverts containing extremely violent language against Palestinians. Meta subsequently apologised, calling it a mistake, but in fact this is happening systematically.
Jennifer Waters, a political researcher at the UCD Centre for Digital Policy, set out the findings of academic research testing Meta’s transparency measures for political advertising. The researchers found significant discrepancies and inconsistencies, including that payments from multiple credit cards can be attached to a single ad account, so that “we don’t know who is paying for these ads”. The fundamental problem is that these transparency measures are based on self-regulation:
“These are not bodies that have to adhere to standards, they have no standards for their own legitimacy for overseeing how political ads are spent in Ireland.” – Jennifer Waters
But we do not need to rely on the platforms controlled by Big Tech: an alternative model for the digital world is possible.
Aaron Rodericks, who co-led trust and safety at Twitter and now works at Bluesky, set out how the decentralised social media network Bluesky provides a concrete example of how things could be done more democratically: without reliance on an advertising-based business model, and with users free to choose their own moderation tools and algorithmic recommendation systems.
“This is a way of empowering people to have more say in the information space” – Aaron Rodericks
Conclusion and Recommendations

The Inquiry exposed a shocking litany of Big Tech’s harms on people, society and the environment.
“Social media is adding fuel to the fire of the polycrises around the world today” – Anna D, spoken word poet
But these are no longer new issues: governments and the companies themselves cannot pretend to be unaware of Big Tech’s harms, and they already have the tools to address them and create a safer digital world.
“People and democracies should be making decisions about our digital space rather than billionaires who have nothing but their own interests at heart” – Ian Power
Proper regulation of Big Tech to ensure a safer internet is long overdue. As the European headquarters for many major tech companies, Ireland has a unique role and responsibility in the EU and globally, yet successive governments have neglected that responsibility. The incoming government of Ireland must:
- Challenge Big Tech’s power and exploitative business model through robust enforcement of existing regulations such as the GDPR, the Digital Services Act and the new Online Safety Code. This includes:
  - Enforcing the GDPR to stop the data free-for-all in online advertising and the misuse of intimate data by recommender systems and commercial interests.
  - Resourcing and supporting Coimisiún na Meán to robustly enforce the new Online Safety Code and the Digital Services Act, ensuring people can easily navigate the complaints process for non-removal of illegal and harmful content.
- Protect everyone from toxic algorithms and addictive design on social media:
  - Require social media recommender systems to be profiling-free and safe by default.
  - Free social media networks from Big Tech control, including by requiring platforms to make their recommendation systems customisable for users.
- Align digital industrial policy with the public interest:
  - Apply strict accountability, labour and climate conditionalities to ensure any public investment in AI and digital infrastructure does not fuel harms, conflict with climate targets, or further entrench Big Tech’s concentrated power.
- Counter the energy crisis:
  - Impose an immediate moratorium on data centre connections and expansion until they can operate within our climate limits, and protect energy and water use for the public good.