Summary
Taiwan’s first Digital Minister, Audrey Tang, highlights the country’s approach to developing safe, sustainable and citizen-led AI as a model to revitalise global democracy. Amid global instability and the rise of misinformation, Taiwan has shown resilience through a ‘whole-of-society’ commitment to democracy; its Ministry of Digital Affairs led efforts to secure critical infrastructure against cyber threats and employed anticipatory debunking to maintain information integrity. Tang argues that leveraging collective intelligence is crucial for effective AI regulation, promoting societal cohesion and democratic values.
Reading time
12 minutes
The ‘Taiwan Model’ offers a playbook for using safe, sustainable and citizen-led AI to revitalise societies worldwide.
Global economic and security instability is placing our free and open societies under tremendous pressure. Not since the 1930s, when the Great Depression and civil turmoil dominated a decade of darkness leading up to World War II, have governments faced such uncertainty. Extremism, isolation, polarisation and populism — amplified by social media and the 24/7 news cycle — are reshaping the geopolitical landscape in ways favourable to authoritarian regimes.
With India and the US, the world’s largest democracy and economy, respectively, going to the polls in 2024 — along with nearly 40 other countries such as Taiwan, Indonesia, Mexico and Pakistan — there is not a moment to waste in recognising the misuse of artificial intelligence (AI) to amplify election-related risks via deepfake videos, echo chambers and micro-targeting, all of which undermine information integrity. Indeed, these tools and tactics are already being used in attempts to sway opinions and create confusion.
What is needed is the collective courage to wrest back control of the narrative by reinvigorating democracy, as well as restoring faith in our democratic institutions and rules-based order. Co-creation is increasingly seen by the public, private and civic sectors as the best means of paving the way for humankind through the 21st century and beyond.
The people must be given a fighting chance to understand how AI systems reply to political questions, the role of model developers in shaping replies, whether models are biased and what their outputs mean. We cannot ignore the fact that lowering the cost of political persuasion threatens the electoral landscape, exacerbating existing divides and fragmenting people into separate information ecosystems.
Doubling down on democracy
I am proud to share that Taiwan was quick out of the ballot box blocks in January this year with smoothly staged presidential and legislative elections — despite insidious efforts of bad actors to sow the seeds of division and discord. The people demonstrated that free and fair voting is the ideal antidote for the ills of authoritarianism. They also showed the world what can be achieved through a whole-of-society commitment to doubling down on democracy.
The Ministry of Digital Affairs, or ‘the moda’, cooperated closely with other Taiwan ministries and agencies to heighten vigilance in the lead-up to the elections. This was essential given that, according to US IT company Cloudflare, the number of cyberattacks against Taiwan increased more than six-fold quarter-on-quarter and more than 33-fold compared with the same quarter in 2022. The reality is that Taiwan faces a growing number of cyberthreats by the day, with over 40% categorised as intrusion attacks.
Through drills and tests, the moda ensured the stable operation of critical infrastructure and key websites, safeguarding systems and establishing a 24/7 rapid response team. Each distributed denial of service (DDoS) attack was logged, analysed and acted upon. This approach, complemented by frontline monitoring, proved effective, as evidenced by a 22% drop in DDoS incidents compared with 2022.
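The moda has not published its monitoring tooling, but the logging-and-analysis loop described above can be made concrete with a minimal sketch: track a rolling baseline of request rates and flag anomalous spikes for the response team. Everything here, from the window size to the three-sigma threshold, is an illustrative assumption rather than moda practice.

```python
from collections import deque
from statistics import mean, stdev

# Illustrative sketch only: flag traffic spikes that may indicate a DDoS.
WINDOW = 60          # number of past readings kept as the baseline
SIGMA_THRESHOLD = 3  # how many standard deviations counts as anomalous

class SpikeDetector:
    def __init__(self):
        self.history = deque(maxlen=WINDOW)

    def observe(self, requests_per_sec: float) -> bool:
        """Log a new reading; return True if it looks like an attack spike."""
        is_spike = False
        if len(self.history) >= 10:  # need a baseline before judging
            baseline, spread = mean(self.history), stdev(self.history)
            is_spike = requests_per_sec > baseline + SIGMA_THRESHOLD * max(spread, 1.0)
        self.history.append(requests_per_sec)
        return is_spike

detector = SpikeDetector()
for reading in [100, 105, 98, 102, 99, 101, 97, 103, 100, 104, 5000]:
    if detector.observe(reading):
        print(f"ALERT: possible DDoS spike at {reading} req/s")
```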
Anticipatory debunking, or pre-bunking, was another secret of Taiwan’s success. Cofacts, a crowdsourced platform set up by g0v (gov-zero) — a decentralised civic tech community enshrining core values such as cooperation, information transparency and open results — played a central role in ensuring the integrity of online information.
Malicious and innocent reports alike were studied and assessed on the basis of accuracy and persuasiveness. With the assistance of community-trained AI systems, the results were quickly released, allowing the people to make informed judgements on the veracity of content.
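Cofacts publishes its own open-source code, and its real pipeline is considerably richer than any sketch. But one way AI assistance can speed up crowdsourced fact-checking is by matching each new report against claims that volunteers have already assessed, so that likely duplicates are resolved quickly and only novel claims queue for human review. The database and threshold below are invented for illustration.

```python
import difflib

# Hypothetical mini-database of already fact-checked claims and verdicts.
CHECKED_CLAIMS = {
    "Polling stations will close early on election day": "false",
    "Voters must bring two forms of photo ID": "false",
    "The 111 SMS number sends official government alerts": "true",
}

def triage(report: str, threshold: float = 0.6) -> str:
    """Match a new report against checked claims so reviewers see likely
    duplicates first; unmatched reports go to the volunteer review queue."""
    best_claim, best_score = None, 0.0
    for claim in CHECKED_CLAIMS:
        score = difflib.SequenceMatcher(None, report.lower(), claim.lower()).ratio()
        if score > best_score:
            best_claim, best_score = claim, score
    if best_score >= threshold:
        return f"likely duplicate of '{best_claim}' (verdict: {CHECKED_CLAIMS[best_claim]})"
    return "no close match; queue for volunteer review"

print(triage("polling stations close early on election day"))
```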
As a responsible member of the international community, Taiwan leads in sharing its democracy-related experience and know-how. This approach centres on giving back while engaging with like-minded partners. It plays an important part in ensuring that this island of resilience and its 23 million freedom- and democracy-loving people can contribute meaningfully to tackling issues of global significance.
Governing AI
Safe and sustainable development of AI systems is one of the many areas in which Taiwan can help. AI systems are machine-based: for explicit or implicit objectives, they infer from the input they receive how to generate outputs such as content, predictions, recommendations or decisions that influence physical and virtual environments. Levels of adaptiveness and autonomy vary from system to system after deployment.
Global governance of AI systems must be a race to safety, not a race to power. A democratic approach — as opposed to a technocratic one — is the optimal answer for what is an ethical, political and societal conundrum. This encapsulates my personal mantra of ‘deep listening and taking all sides’, recognising that intelligence stems from the mind and spaces between people.
To this end, the moda has advanced Alignment Assemblies with the Collective Intelligence Project (CIP) and world-class partners such as Anthropic, OpenAI, The GovLab and GETTING-Plurality research network. Everyday citizens are invited to co-govern AI in the context of information integrity: protecting users from harm; detecting and labelling AI content; requiring digital signatures for advertisers; making AI systems transparent; implementing citizen oversight of fact-checking; and ongoing monitoring of AI incidents.
The genesis of this deliberation lies in vTaiwan, an online-offline consultation bringing together government agencies and ministries, as well as academics, business leaders, civil society organisations, citizens, experts and lawmakers. Supported by a selection of collaborative open-source engagement tools, the process, launched in 2014, enables stakeholders to freely and openly exchange opinions on formulating or revising legislation.
Building consensus
At the heart of vTaiwan is Polis. This real-time system gathers, analyses and interprets what large groups of participants think, visually mapping opinions into clusters of consensus. It is used to address a host of important but generally under-the-radar issues such as copyright, bias and discrimination, due compensation, fair use, public service and broader societal impacts. Its allure lies in a simple yet profound design: people naturally gravitate towards finding common ground rather than delving into divisive issues.
An innovative aspect of Polis is the absence of a reply button. Because participants can propose ideas and comments without going back and forth on trivia, the troll factor is largely designed out. This produces a value-added result, as the focus is on expressing ideas that will garner support from both sides of a divide. Gaps are naturally narrowed by not wasting time on off-piste statements.
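Polis is open source, and the core idea behind its consensus maps can be sketched simply: treat the deliberation as a participant-by-comment vote matrix, group participants who vote similarly, then surface the comments that attract agreement across every group. The toy version below uses generic dimensionality reduction and clustering; the data and parameters are invented, and the real system is considerably more refined.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Toy vote matrix: rows are participants, columns are comments;
# 1 = agree, -1 = disagree, 0 = pass. The data here is invented.
votes = np.array([
    [ 1,  1, -1,  1],
    [ 1,  1, -1,  0],
    [-1, -1,  1,  1],
    [-1,  0,  1,  1],
    [ 1,  1,  0,  1],
])

# Project participants into 2D, then group them into opinion clusters.
coords = PCA(n_components=2).fit_transform(votes)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

# A comment that every cluster leans towards agreeing with is the
# common ground the system surfaces.
for c in range(votes.shape[1]):
    cluster_means = [votes[labels == g, c].mean() for g in set(labels)]
    if all(m > 0 for m in cluster_means):
        print(f"comment {c} is a consensus candidate: {cluster_means}")
```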
In Taiwan, Alignment Assemblies are already laying the foundations for consensus among the people regarding global governance of AI systems, while addressing common challenges and concerns collectively. In March this year, the moda used the 111 SMS number to invite hundreds of thousands of randomly selected citizens to co-create guidelines for AI evaluation in the context of information integrity. (111 was set up by the moda to serve as a trustworthy source of government information, reducing the risk of SMS fraud and further strengthening digital resilience.)
The topics, pertaining to large platforms and serving as a roadmap for policymakers, are: automatically detecting and labelling posts containing AI-generated content; notifying users exposed to falsehoods post facto and providing them with context; assigning a unique anonymous digital ID to each user to ensure content provenance and accountability; ensuring system transparency; implementing citizen oversight and independent evaluation of fact-checking mechanisms; including information integrity as a criterion for AI model standards; and assessing the effectiveness of information analysis and recognition tools in AI products and systems through generative AI labelling functionality.
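Several of these measures, such as digital signatures for advertisers and IDs that guarantee content provenance, rest on standard public-key cryptography. As an illustrative sketch of how signature-based provenance could work, and not a description of any deployed Taiwanese system, an advertiser signs its content with a private key and a platform verifies it with the matching public key before labelling it authentic. The example uses the Python `cryptography` library; the content and key handling are invented.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# An advertiser signs its content; a platform (or user) verifies the
# signature before displaying the content as authenticated.
advertiser_key = Ed25519PrivateKey.generate()
public_key = advertiser_key.public_key()

ad = b"Vote early! Paid for by the Example Campaign."
signature = advertiser_key.sign(ad)

def verify(content: bytes, sig: bytes) -> bool:
    """Return True only if the content really came from this advertiser."""
    try:
        public_key.verify(sig, content)
        return True
    except InvalidSignature:
        return False

print(verify(ad, signature))                          # True: authentic
print(verify(b"Polls are closed today!", signature))  # False: tampered
```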
This deliberation is enhancing societal resilience, ensuring the people have the capacity to understand and direct the role of AI systems in daily life. After all, innovation comes from co-creation among unlikely collaborators, and governments should employ inclusion and radical transparency in trusting the people.
Collective intelligence
The March deliberation springboarded off a moda–CIP Alignment Assembly in 2023, which combined an online component with two in-person deliberative workshops in Taipei and Tainan cities. Also known as ‘Ideathons’, these events promote the future development of Taiwan’s digital industry by inviting everyone to imagine life in the future. The objective is to gather innovative ideas from the people and, in the spirit of open government, build on them to influence policy formulation and promote industrial development.
It was found that the people want to empower workers to develop their skill sets and upgrade AI competence across all sectors. Notably, they want the public sector to play a pioneering role in fine-tuning and deploying local AI. For all intents and purposes, unnecessary trade-offs between rapidity of rollout and safety are unacceptable when it comes to transformative technologies. Progress can only be achieved when such technologies are grounded in participation: to build AI for the people, with the people.
Leveraging society’s collective intelligence is the best way of obtaining more accurate determinations of how AI is impacting the world. Having a diverse group — builders, everyday users, experts and policymakers in many different fields — feed into decisions about such consequential technology is vital for getting those decisions right. We must never lose sight of the fact that we can learn from one another.
Risk response — and responsibility
Another priority area for global governance of AI systems is social media harms. Isolation and polarisation are symptoms of the absence of credibly neutral institutions in society. The moda is erecting bridges between users and social platforms by encouraging the latter to take greater responsibility for content. If a social platform in Taiwan is used to perpetrate scams that are flagged but not taken down promptly, and users suffer financial loss as a result, the parent company is liable for the damages. This re-internalises negative externalities, ensuring the company shares the burden of harms it fails to vet.
The recently established AI Evaluation Center (AIEC) is an additional example of how the moda is supporting global governance of AI systems. As a jumping-off point for comprehensive evaluation of related risks, the AIEC combines safety research and development with innovative mechanisms for collective decision-making. Steps can thus be taken to prevent harm before large-scale damage occurs, while helping the people understand in advance how to mitigate the risks.
Alignment Assemblies can also be employed in adjudicating AI risks and harms. One of the most topical is the persuasive power of large language models (LLMs). Studies show that LLMs with access to personal information are far more effective in changing participant opinions than humans are. This opens the door to advancing false or misleading narratives online, particularly via micro-targeting. The legitimate course of action in this case is to recognise the perils of persuasiveness by assessing acceptability and risk tolerance.
Once an area of general risk is prioritised, and there are one or more high-quality evaluations for this area, the next step is to understand what to do in the case of various evaluation outcomes. In particular, it is critical to understand a proportionate response based on these results.
One option is to create a standing panel, starting with domain experts in relevant areas, that can be asked to adjudicate a severity score for particular evaluation results. This severity score should give a sense of what actions would be proportionate. The adjudication processes can also be recorded in detail to create a precedent for these rulings, which can be abstracted into general criteria. This could also take place in an international body modelled on the UN Intergovernmental Panel on Climate Change.
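To make the standing-panel idea concrete, here is a minimal sketch: aggregate the experts’ scores with a robust statistic so no single outlier dominates, then map the result to a proportionate response. The score bands and responses below are invented for illustration, not a proposed standard.

```python
from statistics import median

# Invented severity bands; a real panel would define these through
# deliberation and record each adjudication as precedent.
RESPONSES = [
    (3, "log the result; continue routine monitoring"),
    (6, "require mitigation before the next release"),
    (9, "pause deployment pending independent review"),
    (10, "withdraw the system and notify the oversight body"),
]

def adjudicate(panel_scores: list[int]) -> str:
    """Aggregate panel scores (0-10) with the median, then map the
    resulting severity to a proportionate response."""
    severity = median(panel_scores)
    for ceiling, action in RESPONSES:
        if severity <= ceiling:
            return f"severity {severity}: {action}"
    return "score out of range"

# Example: a persuasion-risk evaluation scored by five domain experts.
print(adjudicate([7, 8, 6, 9, 7]))
```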
Power to shape a shared vision of AI should not be exclusive to a handful of companies or economies. Collective intelligence processes democratise knowledge and serve as a powerful catalyst for bolstering mutual understanding. When the public witnesses the fruits of collaboration, it sparks a surge in co-creation, innovation and stakeholder engagement. This virtuous cycle is key to revitalising democracy, ensuring we leave no one behind in tackling emerging challenges to our societies and communities.
The Taiwan Model
In time, Taiwan-facilitated AI norms shall become part of the gold standard, further advancing the country’s standing as a trusted and reliable partner. Global governance of AI systems must not hinge on the unilateral decisions of a few companies reflecting the views of specific groups. It is necessary for cross-sector stakeholders to work collaboratively, so the result can be relied upon in total confidence by every member of the family of nations.
The Taiwan Model, which is an amalgam of the aforementioned approaches and pillars, recognises AI systems as a force for good. It also uncovers opinions and perspectives on an array of issues with the end goals of promoting transparency and moving beyond division to create consensus. The answer lies in ‘Plurality’, or technologies for collaborative diversity, to increase the bandwidth of democracy.
Our mission is to send a strong message that hope is possible and all is not lost when it comes to forging a fresh outlook on the global governance of AI systems. Opportunities can be capitalised upon, and risks mitigated, if the people are given the chance to participate in policymaking processes and, by extension, strengthen societal cohesion.
The clock is ticking on charting a safe, sustainable and viable course for the global governance of AI systems. By dialling up the chorus of voices, as well as harnessing the synergy of the Taiwan Model for the collective good, the world can mitigate risks and maximise benefits. Let us muster the courage to make 2024 the year of the democratic bounce-back and free the future — together.
Audrey Tang is a passionate Pluralista who served as Taiwan’s first Digital Minister.
Jude Buffum is an award-winning designer and illustrator from Philadelphia.
This feature first appeared in RSA Journal Issue 2 2024.
Comments
If I understand this article correctly, Taiwan is a country that is taking the implications of AI seriously and doing something about them, rather than simply discussing AI’s possible dangers in a fairly abstract manner. Other countries, take note!