What exciting opportunities exist at the intersection of 𝗽𝗼𝗹𝗶𝘁𝗶𝗰𝗮𝗹 𝘀𝗰𝗶𝗲𝗻𝗰𝗲 and 𝗹𝗮𝗿𝗴𝗲 𝗹𝗮𝗻𝗴𝘂𝗮𝗴𝗲 𝗺𝗼𝗱𝗲𝗹𝘀 (𝗟𝗟𝗠𝘀)? Our recent work 𝗣𝗼𝗹𝗶𝘁𝗶𝗰𝗮𝗹-𝗟𝗟𝗠 explores this space with surveys, case studies, and key future directions!

Key highlights:

𝟭. 𝗧𝗿𝘂𝗹𝘆 𝗶𝗻𝘁𝗲𝗿𝗱𝗶𝘀𝗰𝗶𝗽𝗹𝗶𝗻𝗮𝗿𝘆 🌐 A collaboration between political science (@catherine-chen and Peng (Fred) Gui) and computer science (Yushun Dong and myself), supported by our students and collaborators from 30+ institutes worldwide.

𝟮. 𝗕𝗲𝘆𝗼𝗻𝗱 "𝗷𝘂𝘀𝘁 𝗮𝗻𝗼𝘁𝗵𝗲𝗿 𝘀𝘂𝗿𝘃𝗲𝘆" 🔍 We conducted case studies on political bias and feature generation in LLM-driven voting simulations, evaluating both bias and the quality of LLM-generated political features.

𝟯. 𝗙𝘂𝘁𝘂𝗿𝗲 𝗱𝗶𝗿𝗲𝗰𝘁𝗶𝗼𝗻𝘀 𝗳𝗼𝗿 𝗿𝗲𝘀𝗲𝗮𝗿𝗰𝗵 🚀 We identify 6 critical challenges, including data scarcity, fairness in predictions, and explainability, to guide impactful work at the intersection of AI and political science.

𝗟𝗲𝗮𝗿𝗻 𝗺𝗼𝗿𝗲 𝗵𝗲𝗿𝗲:
🌐 Online resource: https://political-llm.org/
📄 Preprint: https://lnkd.in/gAdEVFEk

#politicalscience #machinelearning #ai #llm
Yue Zhao’s Post
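The second highlight above refers to LLM-driven voting simulations: the model is given a synthetic voter profile, asked to cast a vote, and the aggregated answers are compared across demographic groups to probe for political bias. The snippet below is only a minimal, hypothetical sketch of that general idea, not the protocol used in Political-LLM; it assumes an OpenAI-style chat API, and the model name, prompt wording, profile fields, and the simulate_vote helper are all illustrative.

```python
# Minimal, hypothetical sketch of an LLM-driven voting simulation with a
# group-level bias check. Prompt wording, model name, and profile fields are
# illustrative assumptions, not the protocol used in Political-LLM.
from collections import defaultdict
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def simulate_vote(profile: dict) -> str:
    """Ask the model to role-play one synthetic voter and return 'A' or 'B'."""
    prompt = (
        "You are simulating a single voter with this profile:\n"
        f"- age group: {profile['age']}\n"
        f"- region: {profile['region']}\n"
        f"- education: {profile['education']}\n"
        "In a two-candidate race between Candidate A and Candidate B, "
        "answer with exactly one letter, A or B."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    )
    answer = resp.choices[0].message.content.strip().upper()
    return "A" if answer.startswith("A") else "B"

# Toy synthetic electorate; a real study would sample profiles from survey data.
profiles = [
    {"age": "18-29", "region": "urban", "education": "college"},
    {"age": "65+", "region": "rural", "education": "high school"},
    # ... more profiles ...
]

# Tally simulated votes by a grouping attribute to see whether the model's
# answers skew systematically for one group (a crude bias probe).
votes_by_group = defaultdict(lambda: {"A": 0, "B": 0})
for p in profiles:
    votes_by_group[p["region"]][simulate_vote(p)] += 1

for group, tally in votes_by_group.items():
    share_a = tally["A"] / max(1, tally["A"] + tally["B"])
    print(f"{group}: share voting A = {share_a:.2f}")
```

A real evaluation would sample profiles from survey data (e.g., ANES-style variables) and apply a proper statistical test to the group differences rather than eyeballing vote shares.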
More Relevant Posts
👏 🎉 Excited to announce my latest first-authored work, Political-LLM, a collaboration with 30+ AI and political science researchers from world-leading universities and top companies. Special thanks to Prof. Yue Zhao from USC and Prof. Zhengzhong Tu from TAMU for sharing and posting this research. Amid rapid advances in AI, large language models (LLMs) have found extensive applications in political science, including election forecasting, sentiment analysis, policy evaluation, legislative analysis, and international diplomacy. However, significant open questions remain about how to systematically understand the use of LLMs across political science and how to address the technical challenges they face. To tackle these challenges, our team developed the Political-LLM framework, which offers a comprehensive summary of LLM applications in computational political science and outlines future directions for their development.
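Among the applications listed above, sentiment and stance analysis is the easiest to make concrete. The sketch below is a generic, illustrative example of tagging the stance of political statements with an off-the-shelf zero-shot classifier from Hugging Face transformers; the model choice, labels, and example sentences are assumptions for demonstration and are not taken from the Political-LLM case studies.

```python
# Illustrative sketch: zero-shot stance/sentiment tagging of political text.
# Model choice, labels, and example sentences are assumptions for demonstration;
# this is not the pipeline used in the Political-LLM case studies.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",  # widely used NLI model for zero-shot labels
)

statements = [
    "This bill will finally make healthcare affordable for working families.",
    "The proposed regulation is government overreach and will cost jobs.",
]
labels = ["supports the policy", "opposes the policy", "neutral"]

for text in statements:
    result = classifier(text, candidate_labels=labels)
    # The top-ranked label is the model's best guess for the statement's stance.
    print(f"{result['labels'][0]:>22}  <-  {text}")
```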
I recently edited The Global Public Opinion on Artificial Intelligence survey (GPO-AI), which examined opinions about artificial intelligence (AI) in 21 countries. The survey was created by the Schwartz Reisman Institute for Technology and Society in collaboration with the Policy, Elections & Representation Lab at the University of Toronto - Munk School of Global Affairs & Public Policy. I also enjoyed speaking to the research lead, Peter John Loewen, about the implications. There are so many issues that I'll leave for another post. What I can say for sure is this: public trust in AI varies widely and needs some attention from policy makers. Reports like this can give policy makers a method for identifying the lowest-hanging fruit and getting the ball rolling. We will get there. I loved working on this, and it's intended for public consumption, so please dive in! https://lnkd.in/dk4-cZc6
Global Public Opinion on Artificial Intelligence (GPO-AI) — Schwartz Reisman Institute
srinstitute.utoronto.ca
Data, data, data everywhere! What do we know about data: who owns it, who controls it, and who governs it? And why do we also need a decolonial lens to understand the flow of data from the Global South to the Global North? Manuel Castells, in his seminal work The Rise of the Network Society (1996), described this power struggle and the emergence of the ‘informational economy’. Today, we need a refreshed look at the data grab. Colonialism has not disappeared – it has taken on a new form. In the new world order, Big Tech companies are grabbing our most basic natural resource – our data – exploiting our labour and connections, and repackaging our information to track our movements, record our conversations and discriminate against us. Join Professor Nick Couldry and Professor Myria Georgiou of the Department of Media and Communications at The London School of Economics and Political Science (LSE), and Professor Ulises A. Mejias, as we uncover this new power struggle. Check out the link below. #DataGrab #BookLaunch #DataFlow #LSE #PartofLSE
LSE MSc Media & Communications | GovTech, AI & Digital Media | Communications Strategist, Journalist and Researcher
Professor Nick Couldry of the LSE Department of Media & Communications is launching the book 'Data Grab: The New Colonialism of Big Tech and How to Fight Back' with Professor Ulises Mejias of the State University of New York at Oswego on May 14th at The London School of Economics and Political Science (LSE). An insightful analysis of the impact of #BigTech development, and of the implications of #datafication processes and #AI for the #GlobalSouth. In the following interview, Prof. Couldry and Prof. Mejias discuss their new book:
Q and A with Nick Couldry and Ulises A Mejias on Data Grab
https://blogs.lse.ac.uk/lsereviewofbooks
🌍 Exciting News for Democracy Research! 🗳️ We are thrilled to announce that the Peace Research Center Prague is part of a transdisciplinary project "Strengthening Democratic Resilience Through Digital Twins" (TWIN4DEM). Led by the Erasmus School of Social and Behavioural Sciences (ESSB), the project has received a €3 million Horizon Europe grant. Together with 11 European partners, this project will use cutting-edge Computational Social Science (CSS) methods like natural language processing and dynamic simulations to investigate how democracies decline and, more importantly, how we can prevent it. By creating the first-ever digital twins of political systems in Czechia, France, Hungary, and the Netherlands, the team aims to provide new insights into the causes and effects of democratic erosion. A huge thank you to our partners: Erasmus School of Social and Behavioural Sciences (ESSB), Université catholique de Lille, GESIS - Leibniz Institute for the Social Sciences, Fondazione Bruno Kessler, Linnaeus University, Babes-Bolyai University, MTA Társadalomtudományi Kutatóközpont, Eticas AI, Democracy International and DBC diadikasia. Looking forward to seeing how this innovative project unfolds over the next three years! 🚀 https://lnkd.in/eyzDUVPz #HorizonEurope #DemocracyResearch #DigitalTwins #ComputationalSocialScience #ErasmusUniversity #TWIN4DEM #DemocraticResilience #Innovation
Strengthening Democratic Resilience Through Digital Twins | TWIN4DEM Project | Fact Sheet | HORIZON | CORDIS | European Commission
cordis.europa.eu
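The project description above mentions dynamic simulations among its computational social science methods. TWIN4DEM's digital twins will be far richer than this, but a classic toy example of the family is a bounded-confidence opinion-dynamics model (Hegselmann-Krause style), sketched below purely to illustrate what a "dynamic simulation" of political attitudes can look like; all parameters are invented for the example and nothing here comes from the project itself.

```python
# Toy bounded-confidence opinion-dynamics simulation (Hegselmann-Krause style),
# shown only to illustrate what "dynamic simulations" of political attitudes
# can look like; it is not part of the TWIN4DEM project.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_steps, epsilon = 200, 50, 0.15  # epsilon = confidence bound (assumed)

# Opinions start uniformly spread on [0, 1] (e.g., a left-right scale).
opinions = rng.uniform(0.0, 1.0, size=n_agents)

for _ in range(n_steps):
    new_opinions = np.empty_like(opinions)
    for i, x in enumerate(opinions):
        # Each agent averages the opinions of everyone within its confidence bound.
        neighbours = opinions[np.abs(opinions - x) <= epsilon]
        new_opinions[i] = neighbours.mean()
    opinions = new_opinions

# Count surviving opinion clusters: a rough proxy for polarisation/fragmentation.
clusters = np.unique(np.round(opinions, 2))
print(f"{len(clusters)} opinion cluster(s) after {n_steps} steps: {clusters}")
```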
Information (whether true or false) has a social network effect that forms groups. This phenomenon will increasingly be influenced by AI, which will intensify the information networks that determine our beliefs and behaviour even more. This is termed info-determinism. It will also further challenge experts’ authority, a key problem we are facing as university educators. Rejecting knowledge has become political: “no one can tell me what to think”. Many reject expertise and facts. To counter the intensification AI will bring, we need regulation and a “computer politics” that safeguards information, such as banning AI impersonations. For the moment, I think AI is likely already shaping crucial information networks for our students, and where we’re heading doesn’t appear progressive. A good read on info-determinism here 👇 https://lnkd.in/ehsNp_Jw
Are We Living in the Age of Info-Determinism?
newyorker.com
While researching for one of my master's projects in Audiovisual and Multimedia Communication at UC (Dynamics of Social Media and Digital Media), I came across the article "From Poisons to Antidotes: Algorithms as Democracy Boosters" by Cavaliere & Romeo (2022). The text offered me a critical analysis of the role of algorithms in democracy, a crucial issue in the digital society we live in. The article argues that algorithms can strengthen democracy, as long as they are implemented in a transparent, accessible, and auditable way. This would enable more efficient communication between government and the population, promoting citizen engagement and participation in political life. However, the text also acknowledges the challenges that accompany this promise: algorithmic manipulation and discrimination are real concerns that need to be addressed to ensure fairness and equity in the use of these tools. The reading made me reflect on the role of algorithms in society and inspired me to seek solutions to the challenges they present. I believe that discussing this topic is fundamental for the future of democracy and digital communication. #AI #EU Cavaliere, P., & Romeo, G. (2022). From Poisons to Antidotes: Algorithms as Democracy Boosters. European Journal of Risk Regulation, 13(3), 421–442. doi:10.1017/err.2021.57 https://lnkd.in/dDJw93-Q
From Poisons to Antidotes: Algorithms as Democracy Boosters | European Journal of Risk Regulation | Cambridge Core
cambridge.org
As a community college librarian deeply involved in teaching AI literacy, digital literacy, and media literacy, I found this article to be both alarming and a poignant illustration of our current challenges. As educators and librarians, we are on the front lines of ensuring that our community is equipped with the necessary skills to critically assess and question the validity of the information they encounter. Our role extends beyond traditional literacy; it is about fostering an informed citizenry capable of navigating the complexities of a digital world saturated with AI-generated content. This responsibility highlights why the skills we teach—critical thinking, source evaluation, and digital literacy—are more essential than ever. As the article exemplifies, without these competencies, the public is at risk of being swayed by fabricated realities that could influence their decisions at the polls. In embracing these challenges, we reinforce the value of librarians in our society, proving that our work is vital in guarding against the threats posed by emerging technologies. Let us continue to advocate for the importance of media literacy education and empower our communities to discern truth in an increasingly artificial world. #DigitalLiteracy #AILiteracy #MediaLiteracy #Librarians #Education #Democracy #InformationLiteracy https://lnkd.in/gkd6iZUm
Deepfakes, distrust and disinformation: Welcome to the AI election
politico.eu
The ways in which researchers can artificially inflate their reference counts are growing. https://lnkd.in/gNdp5Jch
The citation black market: schemes selling fake references alarm scientists
nature.com
Two Rutgers faculty members, Anna M. Dulencin, director of the Rutgers Eagleton Institute’s Science and Politics Program, and SC&I Professor of Communication Itzhak Yanovitzky, have launched a new #Rutgers project that advocates for evidence-informed AI regulation to STEM-educated New Jersey state legislators, who often champion evidence-informed policies and serve as trusted sources of scientific information for their colleagues. “Scientists must work together with policymakers and the public to shape sound AI policy that harnesses its potential to benefit individuals and society while placing checks on its potential to cause harms. There is already a great deal of insight from across scientific disciplines and professional practice fields that can productively inform policy discourse but do not effectively reach federal and state policymakers,” they wrote. Read more: https://lnkd.in/e2uxAmKQ #RutgersResearch #RutgersExcellence
Role of State Legislators with STEM Backgrounds in Advancing Evidence-Informed AI Legislation in U.S. States
comminfo.rutgers.edu
AI in Politics: TalkBot Generates Answers, Sparks Controversy #AIdisruptingpoliticalnorms #AIincampaignstrategies #AIinpolitics #AIgeneratedanswers #engagementfromopponents #ethicsofusingAIinpolitics #polarizationinsociety #TalkBot #technologyandtrustininstitutions #usingtechnologyinpolitics
AI in Politics: TalkBot Generates Answers, Sparks Controversy | US Newsper
usnewsper.com
1st year PhD student at RAI Lab, CS FSU | Previously AI Researcher at UNSW Sydney | Multiple industry algorithm internships | Data Mining, GNN, Large Language Models, and more.
Thanks for supporting and promoting Political-LLM 😃