Successful integration of generative AI for civic engagement must be powered by people who use their judgment to validate outputs, mitigate potential errors, contextualize results, and build trust between the government and the community.
Cities globally have begun experimenting with using generative artificial intelligence (Gen AI) for civic engagement. Civic engagement is essential to a well-functioning government and involves the various interactions between the public and the city to share information and make decisions. Using Gen AI for civic engagement holds much potential, including the deployment of chatbots, language translation, synthesis of complex and technical documents, visioning and co-creation of design ideas, data visualization, and simulation and scenario planning. However, these opportunities come with numerous concerns about the accuracy of results, algorithmic bias, private sector involvement, digital equity, and environmental costs. Our review of emerging projects combined with data from conversations with city officials illustrates the importance of human oversight and contextualization when working with Gen AI for civic engagement. Without this oversight, inaccurate results might be mistaken for fact, and the biases inherent in large language models (LLMs) might reinforce dominant cultural narratives—the outcomes of which would surely erode public trust. Given these risks, we argue that the successful integration of Gen AI for civic engagement must be powered by people who use their judgment to validate outputs, mitigate potential errors, contextualize results, and build trust between the government and the community. This people-centered approach requires developing methods to involve communities in decisions about how AI tools should shape city–resident interactions and the design of guidelines for how Gen AI can be used responsibly and ethically for civic engagement.
Keywords: civic engagement; cities; technology adoption; generative AI; trust; transparency
Author Disclosure: The authors would like to thank the Norman B. Leventhal Center for Advanced Urbanism (LCAU), which helped support the development of the grant, including the researchers and students who helped with the development of the workshop and the research needed for the writing of the grant: Adi Kupershmidt, Rohit Priyadarshi Sanatani, Minwook Kang, Wil Jones, Hannah Nicole Shumway, and Addy Smith-Reiman. We would also like to thank the City of Boston staff and members of the public who shared their knowledge and experience with us. Through this partnership, we have been able to mix research with real-world experience.
This research set out to understand the “state of the field” related to Gen AI for civic engagement and make recommendations for its use; therefore, our work started with a review of Gen AI in popular and academic literature. Our team knew that emerging trends and concerns might not yet exist in the literature and therefore developed a series of workshops with the City of Boston to understand, explore possibilities, and learn about risks in using Gen AI for civic engagement. Attended by City of Boston personnel and partners, including staff members from community organizations and private residents, each workshop examined the opportunities and constraints of Gen AI for different user communities. In our first workshop, we sought to equalize knowledge about these systems by presenting the state of the field from our literature review. We used that as a jumping-off point to understand city officials’ interests, issues, and concerns, which flowed freely. In the second workshop, we sought to inspire creative applications of Gen AI for civic engagement, asking participants to think about how Gen AI might be used in their work with communities. At the third workshop, we focused on community residents, presenting them with the state of the field and inviting them to share understandings, perspectives, and concerns. Our workshops illustrated that creative ideas for using Gen AI are still emerging, and it is clear that human oversight is necessary to build those strategies ethically and responsibly. This finding builds on existing literature about technology adoption in civic engagement, which argues that trust is central and can be built through sustained, accurate interaction and collaboration with the technology, municipal employees, and the public. Ultimately, we find that human collaboration with Gen AI for civic engagement is essential for its ethical use.
Civic engagement is the process of creating alignment between city government and the community. It allows the city to share plans, ideas, and information. It also provides a way for constituents to share experiences, provide feedback, and request changes in process or outcome. At the heart of community engagement, there is a transfer of information, bridging information asymmetry between residents and city representatives. If the process works well, the government and the community exchange information and can adjust each other’s perspectives and behavior based on that shared information. Perhaps the most well-known conceptualization of civic participation was developed by Arnstein, who argues that lapses in public participation are due to a failure to bridge the inherent gap between those who hold positions of power (e.g., decision-makers, politicians) or professional expertise (e.g., designers, planners) and those who do not.1 It has long been claimed that traditional participation methods (e.g., public hearings, review and comment processes, and procedures) have often failed to achieve genuine public participation in planning.2 For example, simply involving citizens through soliciting input alone does not challenge structures of power or the underlying values.3 The public has come to demand more active roles in the overall process of urban development, not merely as passive respondents offering input but as co-creators and decision-makers. This shift and pursuit of a more balanced power dynamic between professionals and the public is evident in participation practices that gravitate towards collaborative design.4
Local governments have widely adopted collaborative design processes to bridge participation gaps, and technological tools have played an important role in facilitating these processes.5 According to DeSouza and Bhagwatwar, “[t]echnology enabled participatory platforms can be defined as forums created to source, analyze, visualize, and share information, expertise, and solutions to advance social causes and or solve social society problems.”6 Research has found that civic engagement technology platforms have allowed for greater contributions from citizens by allowing individuals to contribute and make suggestions directly to their government, which means an increased flow of information between the public and the government.7 However, early work has shown that these platforms can also create exclusion, including digital divide issues, which can reduce the effectiveness of civic engagement developed by these tools.8,9,10,11 It is argued that these tools should be designed with the public to help lessen the barriers between the government and the public.12
Technology is an important mediator in the trust relationship. When a government introduces new technology, it negotiates two parallel interactions: one is the smoothness of a transaction (paying a parking ticket, applying for a permit), and the other is the alignment of values, which typically involves a demonstration of how the institution is making decisions or creating new services.13 Trust is connected to our understanding of the level of uncertainty in a situation and, therefore, the inherent risk. When applied to digital engagement tools, it is often hard to truly understand the risk if we only have partial information, which can create mistrust.14
Creating smooth transactions is relatively straightforward. Trust in digital tools can be created through repeated reliable transactions.15 If one is confident in a particular transaction, the sentiment can spread to additional transactions the system facilitates. For example, after several positive experiences buying books from Amazon, users are likely more willing to use the company’s additional and unrelated services such as payment tools, identity verification, etc. The same phenomenon applies to users of technologies in cities. Using a city’s website to pay parking tickets or to buy permits can build trust if things consistently go smoothly.16 Creating value alignment is far more difficult but arguably more essential for civic engagement. It necessitates transparency in what data is used, how it is analyzed, and how it informs decisions. Our literature review and workshops found that there is a general mistrust in the results of Gen AI, and trust must be built through human interaction with the tools to help explain and contextualize its outputs and mistakes. This human oversight is urgent in the case of Gen AI, as cities cannot simply roll out new tools without validating data sources and justifying the decisions that get made through them.
One of the most common use cases for Gen AI in civic engagement is conversational interfaces, which can simplify the information exchange between the public and the government. These largely manifest as chatbots. While chatbots built on traditional rule-based structured interactions have been around for quite some time,17 recent advances in Gen AI and specifically in LLMs have allowed such tools to move beyond preset question-and-answer protocols and towards more open-ended and human-like interactions with the public. These more recent applications have enhanced the exchange of information by augmenting it with nuanced contextual data, facilitating the two-way exchange of information across languages, and allowing for the possibility of multi-agent simulation.
For example, Gen AI–driven chatbots may use LLMs that are fine-tuned on use case–specific contextual data (e.g., site-specific zoning data, master planning processes, permitting rules, etc.). These new tools have demonstrated the potential to serve as knowledge banks that can both record and disseminate information in human-like ways. The City of Helsinki, for example, has been leveraging the power of such models to provide information on urban services such as parking, sports and recreational facilities, and rental housing stock (https://ai.hel.fi/en/ai-register/). These platforms provide relevant responses based on rule-based discussion paths and the data that has been fed into them. Others, such as CitizenLab (www.citizenlab.co), deploy conversational agents for use cases such as participatory budgeting, collecting feedback on public proposals, and AI-powered summarization of community inputs. Researchers at the University of Helsinki have adopted a more multifaceted approach and launched Co Creation Radar, a set of AI-based tools aimed at helping city officials bridge the gap between citizens’ needs and the City of Helsinki’s resources. One of its services—CityTips—uses AI-prompted recommendations to convert citizens’ needs into actionable programs.
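To make this pattern concrete, consider a minimal sketch of how a municipal chatbot might be grounded in use case–specific context by placing vetted text directly in the model’s instructions. This is an illustration only, assuming the OpenAI Python client; the model name, zoning excerpt, and prompt wording are invented placeholders, not any city’s actual deployment.

```python
# Minimal sketch of a context-grounded municipal chatbot.
# Assumes the OpenAI Python client (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; the model name and zoning text
# are placeholders, not any city's actual deployment.
from openai import OpenAI

client = OpenAI()

# Use-case-specific context the model should treat as authoritative.
ZONING_CONTEXT = """
Article 53, Section 9 (hypothetical excerpt): Accessory dwelling units
are permitted in one- and two-family residential districts subject to
design review.
"""

def answer_resident(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a municipal information assistant. Answer ONLY "
                    "from the context below. If the answer is not in the "
                    "context, say you do not know and refer the resident to "
                    "a human staff member.\n\n" + ZONING_CONTEXT
                ),
            },
            {"role": "user", "content": question},
        ],
        temperature=0,  # favor consistency over creativity
    )
    return response.choices[0].message.content

print(answer_resident("Can I build an accessory dwelling unit on my lot?"))
```

Even this simple grounding step reflects the human oversight we argue for: a person still chooses which documents the model may draw on and what it should do when the answer is not there.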
While the City of Helsinki has found Gen AI useful for enhanced and reliable customer service, this has not been the case for a chatbot released by New York City in 2023. The New York City chatbot focused specifically on helping small business owners access city services. Within five months of the chatbot’s release, controversies around its content started circulating. The bot was producing inaccurate, biased, and sometimes illegal suggestions. For example, when asked whether landlords need to accept Section 8 vouchers, the chatbot answered no; in fact, it is illegal to discriminate based on income in New York City. While these mistakes have been addressed and disclaimers have been added to the interface, the criticism is growing in intensity, with many calling for cities to develop more ethical standards for deploying Gen AI, including better human oversight. Perhaps more importantly, these high-profile missteps contributed to a general mistrust of using chatbots to deliver essential information to the public.
Concerns about the ability to trust the results of Gen AI–driven chatbots were also brought up in the workshop with Boston City officials. These included biases in datasets, inequity, and potential misinformation arising from inaccuracies in training data and the models themselves. One city official stated, “I think . . . the thing that always worries me about generative AI is that it’s doing something really complicated and making the complexity invisible and presenting a simple answer.” Here, the official worries about the nuances and intuition of human interaction, which might not be interpreted by the LLMs driving the chatbots. Another participant wondered if there was a way to check the accuracy of the answer, saying, “are there ways in which we can work to try to make it more reliable, like, review and make sure that you can have it check answers against each [other].” The official elaborated by saying he wondered if the Gen AI could check its own answers and, when there is a misalignment, send the request to human agents answering Boston’s non-emergency complaint and information hotline, 311.
Recent research has focused on developing methods for evaluating and interpreting the degree of confidence with which LLMs generate results. The approach has been to find ways to quantify the uncertainty within a model associated with any specific generated output.18 Such methods have been used to develop tools for detecting incorrect or speculative answers, thus making users aware of when to trust or not trust the results generated by these models.19 One way trust could be built is by communicating a numeric confidence score along with every LLM result, which many believe would go a long way toward enhancing transparency and trust between users and Gen AI systems. However, other research has shown that while these indicators might be useful, human interpretation and observations about the results increase trust more than scores—again, we see the importance of human oversight in building trust in models.
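As an illustration of one simple heuristic in this family, the sketch below samples the same question several times and treats agreement among the answers as a rough confidence proxy (a self-consistency-style check; this is our own simplified example, not the specific method proposed in the cited research, and the model name and threshold are placeholders).

```python
# Illustrative sketch of a simple uncertainty heuristic: sample the same
# question several times and use answer agreement as a rough confidence
# proxy. Exact-match agreement is crude; a real system would normalize or
# classify answers first. Assumes the OpenAI Python client.
from collections import Counter
from openai import OpenAI

client = OpenAI()

def answer_with_confidence(question: str, n_samples: int = 5):
    response = client.chat.completions.create(
        model="gpt-4o-mini",          # placeholder model
        messages=[{"role": "user", "content": question}],
        temperature=1.0,              # sampling diversity is the point here
        n=n_samples,                  # draw several independent answers
    )
    answers = [c.message.content.strip() for c in response.choices]
    top_answer, count = Counter(answers).most_common(1)[0]
    confidence = count / n_samples    # fraction of samples that agree
    return top_answer, confidence

answer, confidence = answer_with_confidence(
    "Do New York City landlords have to accept Section 8 vouchers?"
)
if confidence < 0.8:                  # threshold is arbitrary; tune locally
    print("Low agreement; route this question to a human agent.")
```

A low-agreement result could then be routed to a human agent, echoing the workshop suggestion of sending misaligned answers to 311 staff.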
Gen AI can help overcome language barriers by providing real-time translation services for civic engagement activities such as public meetings and online forums. This ensures that all community members can participate and contribute to the discussion regardless of their language proficiency. The City of Boston found this an opportunity to provide better engagement with their 311 system, a community complaint and information hotline that requires support in fourteen languages. One Chinese-speaking Boston resident participating in the workshop showed excitement, saying, “If 311 were able to have simple communication tools that allow them to input in Chinese, that would be great. When 311 gets Chinese, it will be more helpful for the resident.” This same resident indicated that one of the biggest barriers to engagement with the government is the ability to access information and communicate in their language. Here, we see the greatest potential for Gen AI to help increase how and with whom we engage.
However, the fact that most LLMs have been trained largely on English-language corpora raises concerns about equity, accessibility, and representation. Platforms such as CitiBot (www.citibot.io) attempt to address these issues by focusing on the creation of multilingual (71 languages) chatbots for trust-building between governments and citizens. Initiatives like Jugalbandi (www.jugalbandi.ai) combine LLMs with language translation models to make government programs and rights information accessible across India; the system works by translating user inputs from a source language for processing by the LLM and re-translating responses back into the user’s language. In related applications, LLMs facilitate the translation of materials into multiple languages and create accessible content for those with disabilities or those who do not speak the dominant language.20 In public health settings, LLMs equipped with real-time translation capabilities improve the clinical experience for patients with limited English proficiency, meeting legal requirements and enhancing outcomes.21 The ongoing development of speech recognition in LLMs22 and recent advancements in voice-based interactions by OpenAI show overall potential for broader and deeper public engagement in various settings.
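The translate-process-retranslate pattern described above can be sketched in a few lines. The example below uses a single LLM for both translation and answering, which is a simplification; production systems such as Jugalbandi pair LLMs with dedicated translation models, and the model name and prompts here are placeholders.

```python
# Sketch of the translate-process-retranslate pattern: the resident's
# question is translated into English, answered, and the response is
# translated back into the resident's language. Assumes the OpenAI client;
# in practice the answering step would be grounded on city documents.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder

def llm(prompt: str) -> str:
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

def answer_in_language(user_text: str, language: str) -> str:
    # 1. Translate the resident's question into English for processing.
    english_q = llm(f"Translate this {language} text to English:\n{user_text}")
    # 2. Answer in English (in practice, grounded on city documents).
    english_a = llm(f"Answer this question about city services:\n{english_q}")
    # 3. Translate the answer back into the resident's language.
    return llm(f"Translate this English text to {language}:\n{english_a}")

# "How do I report a damaged sidewalk?" asked in Chinese.
print(answer_in_language("如何举报人行道损坏？", "Chinese"))
```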
Gen AI has been proposed as a useful mechanism to synthesize and analyze complex community data on local scales and communicate this synthesis back to the community in an accessible way.23 For example, Gen AI has been suggested as a summarization mechanism to help understand local and regional voter preferences between proposed policies and as an augmentation and explainability technique for voting systems.24 Such approaches require significant further investigation before they are safe to deploy; however, there are several applications of LLMs in the context of civic engagement that benefit residents of local communities. For example, LLMs have been deployed to summarize notes and recorded discussions from community meetings in order to make that data more accessible and interpretable to the public.25
Similarly, city governments are testing simple use cases for LLMs on their websites. In particular, they are using LLMs to summarize and tag content to make it more accessible for residents. In May 2024, the City of Boston’s emerging technology and digital teams found their first use case for Gen AI on a new webpage for Boston City Council roll call votes. They used Google’s Gemini 1.5 Pro to automatically generate descriptive titles for over 1,100 City Council vote records from the past 16 years. The City of Boston is also exploring using Gen AI to tag content across thousands of its web pages to improve the search functionality of boston.gov.
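A minimal sketch of this batch-titling pattern might look like the following, here using the google-generativeai Python package. It is an illustration only, not the City of Boston’s actual pipeline; the record text, prompt, and API key handling are invented, and generated titles would still need human review before publication.

```python
# Illustrative sketch of batch title generation for council vote records
# with the google-generativeai package (pip install google-generativeai).
# The records and prompt are invented; this is not Boston's pipeline.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-pro")

vote_records = [  # hypothetical raw roll call records
    "Docket #0421: Order for a hearing re: sidewalk repair funding FY2024...",
    "Docket #0733: Resolution in support of expanded youth jobs programs...",
]

for record in vote_records:
    response = model.generate_content(
        "Write a one-line, plain-language title (under 12 words) for this "
        "City Council vote record:\n\n" + record
    )
    # A human reviewer should still check titles before publication.
    print(response.text.strip())
```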
A recent review of popular methods for analyzing citizens’ inputs through Gen AI reveals that the methodological landscape on this front still relies on conventional natural language processing methods for traditional use cases such as sentiment analysis, topic modeling, or relation identification.26 While there is undeniable potential for using LLMs for data analysis and conversational insight generation, such approaches have only recently been gaining ground within this field. Moreover, most projects focusing on large-scale crowdsourced opinion mining/collection from citizens (independent of their analysis methods) do so either through the passive gathering of data or through traditional rule-based conversational approaches—Gen AI–facilitated interaction has yet to be developed. For example, while projects such as Pol.is (https://pol.is/home) offer open-source systems for gathering and analyzing opinions, the data collection framework is limited to facilitating open-ended conversations between users on topics, and the analytical framework relies on unsupervised clustering for the identification of themes. Others, such as CitiCafe, use social media platforms such as Twitter for civic engagement;27 the conversation itself, however, is structured using rule-based prompts and responses, so it is not truly Gen AI.
Conversations with the City of Boston showed creative ideas about using Gen AI to make complex information more accessible. One city official mentioned the benefit of using Gen AI to quickly find answers in complex planning documents that might be “over 800 pages” within minutes and of providing the results to communities at meetings in real time. Another creative idea suggested deploying Gen AI to summarize and geotag historical data, currently in various analog formats, allowing the city to make that information more actionable. On a related note, there was excitement about using vision recognition software and machine learning algorithms to help build hard-to-acquire civic data, such as detecting graffiti or public art, using Google Street View images (although not purely Gen AI). Gen AI could also streamline writing calls for artist proposals by assisting in phrasing and engaging the public in co-design processes.
LLMs also have the potential to be used for capturing the semantic content of unstructured 311 service request data, thus allowing analysts to interact with the system to derive nuanced insights. Here, the participants mentioned identifying potential issues around violence and escalating those calls more efficiently. There is also the potential to use such models for predictive analytics to help address community issues proactively before they escalate into significant problems. Gen AI can thus help identify and focus resources on the most pressing community needs, improving resource efficiency.
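A hedged sketch of what such LLM-assisted triage with human escalation might look like follows; the prompt, labels, and model name are our own illustrative assumptions, and the deliberate default is that anything the model flags, or fails to answer cleanly, goes to a person.

```python
# Sketch of LLM-assisted triage of free-text 311 requests with human
# escalation: the model proposes a category and an urgency flag, and
# anything flagged (or unparseable) is routed to staff. Illustrative only.
import json
from openai import OpenAI

client = OpenAI()

PROMPT = """Classify this 311 request. Reply with JSON only:
{{"category": "<short label>", "possible_safety_risk": true/false}}

Request: {text}"""

def triage(request_text: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": PROMPT.format(text=request_text)}],
        temperature=0,
    )
    try:
        result = json.loads(response.choices[0].message.content)
    except json.JSONDecodeError:
        # If the model's output cannot be parsed, default to human review.
        result = {"category": "unparsed", "possible_safety_risk": True}
    return result

result = triage("Streetlight out on the corner; people are fighting there at night.")
if result["possible_safety_risk"]:
    print("Escalate to a human 311 agent:", result["category"])
```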
Visual Gen AI tools offer considerable potential for fostering civic engagement in urban design, a crucial aspect of participatory governance involving public input, critique, and collaboration in shaping the future urban landscape. By allowing users to imagine and articulate visions, AI-generated images provoke reflection and spark new ideas, enriching the design discourse. These tools address the inherent gap in design expertise between professionals and the public, enabling residents to intuitively express their urban visions and stimulate meaningful discourse. Text-to-image models like DALL-E, Gemini, and Midjourney empower community members to generate and manipulate images based on textual prompts, even when participants do not have design expertise.28 For instance, users can overlay Google Street View images with proposed design modifications, facilitating idea generation and modification for urban spaces. Noteworthy examples include the work of Zach Katz, a Brooklyn-based artist who visualizes car-dense streets in cities, including New York and Boston, as pedestrian and public transit utopias with DALL-E.29 Another notable project worked closely with community members to envision ideas for Puente Hills Landfill Park near Los Angeles, California, using the open-source visual Gen AI tool Dream Studio.30 Adobe Photoshop’s Text-to-Image feature empowers users to propose design alternatives by overlaying images with desired modifications such as alternative transportation lanes or green spaces.31 In addition to design visioning, AI-generated images are crucial in raising awareness of future environmental risks and advocating for sustainable policies. Platforms like FloodGen32 visualize climate hazards, empowering communities to take proactive measures and influencing policy decisions through compelling visualizations. It is important to note that while text-to-image software can be extremely helpful, many participants found the disparity between their expectations and what was generated enlightening, prompting them to think creatively about how to make these systems more accurate with better input training data.33 However, it is not only training data but also how users request image generation that makes a difference; the same request made using different words can produce wildly different results, illustrating the need for human contextualization of false outputs and improvements in our ability to engineer requests.
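For readers who want to experiment, the sketch below shows how such images can be generated with the open-source diffusers library and a public Stable Diffusion checkpoint, and how two phrasings of the same underlying request can yield very different images. This is a generic illustration, not the tool used in any of the projects cited above, and it assumes access to a GPU.

```python
# Sketch of text-to-image visioning with the open-source diffusers library
# (pip install diffusers torch). The two prompts describe the same idea in
# different words, illustrating prompt sensitivity; compare the outputs.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The same underlying request, phrased two ways.
prompts = [
    "a car-free neighborhood street with wide sidewalks, street trees, and benches",
    "a pedestrianized urban street, bike lanes, outdoor seating, lush greenery",
]

for i, prompt in enumerate(prompts):
    image = pipe(prompt, num_inference_steps=30).images[0]
    image.save(f"street_vision_{i}.png")  # compare outputs side by side
```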
Gen AI visioning not only exists in the 2-D space but has also been applied to the development of 3-D models and virtual reality (VR). Recent advancements in AI technology also enable the generation of 3-D urban models, allowing users to simulate different urban fabrics and explore design implications for specific sites. The Urban-GAN computation system enables non-expert users to generate 3-D models for a defined intervention area by choosing an existing city to imitate,34 allowing users to generate a “Dubai-like” or a “New York–like” 3-D urban model and learn the implications for the site in question. In VR applications, users can immerse themselves in proposed urban designs on a 1:1 scale. Users prompt the AI model to generate textures on a 3-D urban model (utilizing a Grasshopper plug-in for Rhino), and by integrating AI-generated textures into VR environments, participants can actively iterate on urban design ideas, enhancing community engagement.35 It should be noted that during workshops with the City of Boston, several participants pointed out the one-sided nature of public participation and recommended platforms enhanced by text-to-3-D AI models to enable increased collaboration with the public.
The two biggest needs in using visual Gen AI for collaborative design and participatory visioning are improved models representing diverse sociocultural contexts and the use of expert facilitators when deploying these tools in public participation processes. The lack of nondominant community representations in the models often produces results that do not represent the diversity of communities, which could have marginalizing effects on these community members. A recent test from our team using Midjourney in East Boston resulted in an image in which the color of people’s skin was changed from dark to light. This problematic result must be discussed with community members to help create alternative visions. One solution to these problems would be using Gen AI models trained with community-sourced images, creating increased contextualization of the place and its people. Visual Gen AI tools for participatory work need expert facilitators, as the workflows are often too technical for the average citizen.36 This type of facilitation is not new and has always been a part of collaborative design and visioning, allowing a conversation between the designer and the public; visual Gen AI just makes the process quicker and creates a way for non-designers to illustrate their vision using vast databases of examples that were previously hard for residents to draw upon.
Participants’ ideas for the future of visual Gen AI in civic engagement focused on integrating building codes, site data, contractor drawings, and artist impressions to help quickly create alternative solutions that meet standards.
LLMs can make data more accessible by allowing anyone to analyze and visualize data, playing an important role in creating the government transparency essential to civic engagement.37 These tools allow anyone to visualize data available on municipal open data portals, making maps of everything from 311 complaints to building permits. However, greater access to data and algorithms increases the potential for inaccurate results due to potential biases in the data acquired or the algorithms applied.38 Issues such as inconsistent results from identical requests and unclear data sources underscore the need for more reliable and transparent methods.39 The accuracy of analyses largely depends on the quality of data available, necessitating robust data management systems.40 It is critical to manage data effectively to maintain trust in these advanced technologies, and data governance has been suggested as one solution. There is also a risk that users may misunderstand the data or overly rely on the tool’s outputs, potentially impacting their decision-making.41
Examples of Gen AI models that provide easier access to data analysis include GeoGPT, a variant of OpenAI’s GPT transformer model, and LLM-Geo, both of which aim to integrate geographic information systems (GISs) with LLMs to make it simpler for nonexperts to access complex geographical data and instantly make maps of their data. For instance, GeoGPT simplifies data collection, processing, and visualization through simple language commands.42 Similarly, GeoQAMap employs an LLM to transform regular questions into SPARQL (i.e., SPARQL protocol and RDF query language) queries, enabling users to navigate complex databases without needing expertise in specific programming languages.43 This application of LLMs in geospatial data analysis and visualization also extends into the private sector. For example, Aino.world (www.aino.world) recently introduced web services that support prompt-based data queries and map visualizations. Similarly, the widely recognized GIS software ArcGIS (www.arcgis.com) is actively updating its user community on how it incorporates LLMs into its platform to assist users with queries, analysis, and the visualization of geospatial data.44 These types of operations are best left to expert data scientists who understand how to validate their analysis and the limitations of the algorithms that drive them.
While these tools provide an increased ability to access and visualize openly available data, they are better used in partnership with people with expert knowledge about data accuracy, completeness, and the effects of search terms on the results. A great example of this issue was demonstrated during our workshops, when Gen AI was used to visualize 311 complaints about Boston’s potholes. The participants were impressed when a map of the complaints appeared instantly on the screen; however, further inspection from the 311 experts in the room showed that the maps were incomplete because they only visualized complaints from the previous month. Another example from the workshop illustrated how poor or inaccurate search terms affect visualization results. Participants attempted to develop a map of rat complaints and, understandably, used the term “rat” to identify the data, but they found no results. However, when the term “rodent” was deployed, a map of complaints across the city appeared. Expert data scientists are trained to look for these types of inconsistencies in how data is stored (“rat” vs. “rodent”); however, data novices might not check for or be aware of these potential differences. It is not only how data is searched and stored that makes a difference in the accuracy of results but also how users ask Gen AI to develop the code itself. For example, data science experts know the underlying mechanism behind the code that creates maps; therefore, they ask Gen AI to perform a task in a language it more precisely understands, producing higher accuracy rates. It is well established that Gen AI tools perform better when the searches are highly engineered, and this has led to a new field—Gen AI search engineering. Examples from the workshops make clear that specialized knowledge increases the potential accuracy of Gen AI for data analytics and that expert oversight during civic engagement can help verify results and create a sense of trust within the community.
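The “rat” versus “rodent” pitfall is easy to reproduce. The self-contained sketch below uses invented sample records to show how a naive filter on the wrong label silently returns nothing, which is exactly the kind of inconsistency an expert would catch.

```python
# Self-contained illustration of the "rat" vs. "rodent" pitfall from the
# workshop: the data is stored under one label, so a naive filter on the
# other returns nothing. The sample records are invented.
import pandas as pd

complaints = pd.DataFrame(
    {
        "type": ["Rodent Activity", "Rodent Activity", "Pothole", "Graffiti"],
        "neighborhood": ["East Boston", "Dorchester", "Roxbury", "Allston"],
    }
)

# A novice's query: zero rows, which looks like "no rat problem."
print(len(complaints[complaints["type"].str.contains("Rat", case=False)]))    # 0

# The term actually used in the data dictionary returns the real picture.
print(len(complaints[complaints["type"].str.contains("Rodent", case=False)]))  # 2
```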
It is clear that while the system can perform simple tasks autonomously, more complex spatial analyses often require human oversight, especially for tasks that demand detailed decision-making.45 To continue advancing the adoption of LLM-powered GIS tools, ongoing development and rigorous testing are essential. Enhancing the ability of LLMs to process complex spatial requests and implementing systems to verify these results are crucial next steps; such improvements will not only build user trust but also broaden the application of LLMs in community planning and decision-making and further welcome broader citizen engagement.
There are serious concerns in the literature about using private sector algorithms within the public sector,46,47 echoing concerns raised about Gen AI during our workshops. Most of the current literature that discusses this issue focuses on general-purpose AI algorithms.48 These issues included questions about bias in Gen AI training data, the accuracy of results, algorithmic biases, transparency and discrimination, legality, dependence on the private sector for modeling (increased cost, lock-in, etc.), how the private sector uses data (search queries or questions) inputted by the public,49 and the input into private modeling tools of city data that requires privacy protection. Zuiderwijk et al. recommend that governments create focused strategies for dealing with these issues arising from different forms of AI through unique data governance protocols for each typology (e.g., vision recognition, transportation planning, and environmental modeling).50 Data governance refers to the decisions made about data collection, analysis, storage, and use (i.e., Where does the control and management of the data reside and with whom?). It is clear that developing a data governance structure for Gen AI applications in cities would go a long way toward increasing trust in their use. However, cities struggle with developing these data structures due to a lack of funds, political will, and in-house expertise to develop these detailed plans.
The ability to trust the algorithms and data analysis provided by private sector companies is essential to civic engagement, and this lack of trust will be an obstacle to the widespread use of Gen AI for civic engagement. Uncertainty around the possible negative effects of how private sector models are trained and, more specifically, the biases associated with that data featured prominently as topics of concern in the workshops. For example, one participant asked, “Should we be using these private sector tools and modes because we don't have control over the training data?” Beyond the question of bias in training data used by private sector Gen AI tools, questions about how user input data is stored and used featured prominently, with concerns about possible exposure of community-embedded training data. For example, one participant wondered how using users’ search data might cause harm, asking, “Could people be targeted for having gone a certain way to get to a certain answer, or what happens to the questions they asked on these platforms?” Another participant commented, “They say that they’re not incorporating the data into the search of the model, but they’re not telling us what they’re doing with your search.” Such themes echo concerns being widely discussed in research about the use of AI in public governance.51,52,53 Government officials are worried about uploading sensitive data to these private sector tools, as they are unsure how private companies might use the data. Further research on this topic should be conducted to determine whether cities could expose the public to harm.
During the workshops, questions about the accuracy and privacy of information led to a consideration of the role of government in creating regulations for the use of Gen AI. Furthermore, participants wondered whether using private sector tools amounted to endorsing them and would therefore break with city ethics codes around favoritism toward specific companies. The issue of copyright infringement was mentioned several times, as City of Boston government staff have worked to minimize the risk of Gen AI to local governments with a focus on regulatory and legal challenges. Discussions touched on the challenges created by existing regulations that do not account for Gen AI and modern digital technologies, thereby emphasizing the need for updated policies that also cover use by entities outside the purview of existing government regulation. Related to this issue, there is a movement to encourage governments to create more governance structures around the use of AI for decision-making along the lines of ones already developed by private companies.54 These are exactly the types of issues that would be addressed in a Gen AI data governance document, and it is clear that such documents could be extremely useful.
In response to issues around the lack of transparency of commercial models and questions of bias, privacy, accuracy, and transparency, participants suggested that LLMs could be designed for the government or more collaboratively with the public (possibly in collaboration with academics), with model training data drawn from the public domain, thus ensuring transparency and enhancing trust. There is a movement to build these types of open-source Gen AI models for governments. A recent article by Booz Allen Hamilton noted that using these open-source models for government allows for greater customization, portability, and model transparency, therefore also allowing for risk mitigation, the inclusion of a diversity of knowledge bases, and compliance with government security protocols. However, these benefits come with trade-offs, including higher development costs, greater demands on technical capacity, competition with proprietary models, investment in new skill sets, and decisions about how much to customize.55
While hosting and maintaining their own local, open-source LLMs56 may be a way off, governments are already fine-tuning commercially available LLMs with retrieval-augmented generation,57 which gives models external sources of information. For example, the City of Colorado Springs recently launched a new AI-based chatbot called AskCOS in partnership with Citibot.58 Developers used pages and documents from their city’s website as the chatbot’s sole source of information. City governments may find retrieval-augmented generation attractive since it can help them cite reliable sources and reduce the production of inaccurate results. Looking forward, open-source models such as BLOOM, which is particularly adept at multilingual support,59 might fit government applications for civic engagement particularly well, as our research found that being able to provide information in multiple languages is one of the biggest challenges for civic engagement and opportunities for Gen AI. The potential for using open-source Gen AI and developing applications specifically for civic engagement might be extremely useful for helping to address issues of accuracy and privacy that arise when using private sector tools.
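A minimal sketch of the retrieval-augmented generation pattern follows: embed a set of city web pages, retrieve the passage closest to the resident’s question, and instruct the model to answer from that passage alone. This is a generic illustration of the pattern, not the AskCOS system; the page text and model names are placeholders.

```python
# Minimal retrieval-augmented generation sketch: embed city web pages,
# retrieve the closest passage to a question, and answer only from it.
# Assumes the OpenAI client and numpy; model names are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()

pages = [  # stand-ins for pages scraped from a city website
    "Trash and recycling are collected weekly; holiday schedules shift by one day.",
    "Resident parking permits can be renewed online or at City Hall, room 224.",
]

def embed(texts):
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in response.data])

page_vectors = embed(pages)

def ask(question: str) -> str:
    q = embed([question])[0]
    # Cosine similarity to find the most relevant page.
    sims = page_vectors @ q / (np.linalg.norm(page_vectors, axis=1) * np.linalg.norm(q))
    context = pages[int(np.argmax(sims))]
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[{
            "role": "user",
            "content": f"Answer from this city web page only, and cite it:\n"
                       f"{context}\n\nQuestion: {question}",
        }],
    )
    return response.choices[0].message.content

print(ask("How do I renew my parking permit?"))
```

Grounding answers in retrieved, citable pages is one reason this pattern appeals to cities: the source of each answer can be shown to the resident, supporting the transparency this article argues for.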
Importantly, workshop participants discussed how city officials might need to teach their constituents how to use and understand Gen AI tools for civic engagement. This brought up questions about the digital divide, tech/data literacy, and the challenges for the government in achieving digital equity. The term “digital equity” has taken on different meanings over time. However, it has become more important to municipal governments because of two recent bills, the Digital Equity Act of 2019 and the Accessible, Affordable Internet for All Act of 2020. These bills define digital equity as “the condition in which individuals and communities have the information technology capacity needed for full participation in the society and economy of the United States.”60 The legislation spurred the development of municipal digital equity plans across the United States, which focus on barriers to technology in their cities and seek to address issues around the digital divide, a term that describes the disparity in access to digital technology by different socioeconomic groups.61 A recent study of digital equity plans in Kansas City, MO, Portland, OR, San Francisco, CA, and Seattle, WA, showed that cities often focus on improving access to technology by providing funds to encourage better acquisition of Wi-Fi and computers as well as education programs in the use of digital tools.62 Education about using Gen AI will need to be incorporated into municipal digital equity plans, especially if it is meant to be used for civic engagement.
While research has focused on Gen AI and digital equity as it relates to labor63 and schools,64 there is little research on bridging the digital divide related to Gen AI for civic engagement. The research that does illustrate a digital divide for Gen AI shows that it is used more often by people with higher education levels and socioeconomic status, as well as by people in urban communities.65 Historically, the City of Boston’s digital equity efforts have focused on device access and internet connectivity. Through its Digital Equity Fund, the City of Boston supports community organizations that provide digital skills training. Currently, this training focuses on the basics of internet access, such as email and web browsing, and useful online tools such as Google Maps and Google Docs. These training organizations have yet to include training for Gen AI; however, there is clearly an emerging need. Therefore, governments must develop digital equity plans that include educational programs that make Gen AI more accessible to the public in general. Perhaps more importantly, these programs must provide guidance that helps the public understand how to interpret and critique results from Gen AI. These programs will not only be useful for civic engagement but also in addressing the effects of Gen AI in schools and the labor market.
As the participants weighed the pros and cons of Gen AI in the civic engagement space, the conversations led to acknowledging environmental costs. AI systems in general, and Gen AI systems in particular, are known to come with significant energy, carbon, and water costs.66,67,68 One suggestion for addressing this issue has been the development of more energy-efficient hardware and algorithms; while work has started, there is much to be done.69 Others have suggested that we need to develop an evaluative framework for understanding the costs and benefits, including the impacts on society and the economy, to weigh against the environmental impacts.70 However, developing such evaluative frameworks necessitates access to private Gen AI developers’ data, which are difficult to obtain. Others have suggested that the use of AI systems should be paused until we identify these risks, among others.71 The literature shows that addressing these costs is essential for the planet’s health.
Understanding the environmental costs and benefits of using Gen AI was a concern of the participants of the workshops, given that many cities strive for net zero carbon emissions—how might using Gen AI affect that accounting? One participant mentioned, “If you were to think about the cost of electricity and natural resources and water, would that change your impression of these tools and their effectiveness?” This same participant mentioned the need to balance the trade-offs, saying, “Environmental cost is one of the things that concerns me where there’s a trade off, and there’s a balance between . . . what’s useful.” These questions are important not only for using Gen AI for civic engagement but also for its general application in government. Identifying the true environmental cost is essential for making informed decisions about the impacts (costs and benefits) of Gen AI socially and economically.
Our work shows that Gen AI can enable new mechanisms for community engagement, allowing residents to transform the information provided by the city in a way that is more relevant or actionable through improved language translation, more accessible summaries of lengthy technical documents, visioning the design of their communities, and search, to name a few improvements. However, our work also shows that these new methods have numerous risks—uncertain accuracy, completeness, bias, privacy concerns, and dependence on private sector tools. These risks can further misalign the community and the city government by creating mistrust, which is antithetical to a functioning government. Given these risks, we argue that the successful integration of Gen AI for civic engagement must be powered by people who use their judgment to validate outputs, mitigate potential errors, contextualize results, and build trust between the government and the community. This people-centered approach requires developing methods to involve communities in decisions about how AI tools should shape city–resident interactions and the design of guidelines for how Gen AI can be used responsibly and ethically for civic engagement. It is well known that trust is built through sustained, accurate, and transparent human interaction; therefore, future research must interrogate issues that cause mistrust in Gen AI. At the same time, we must also develop governance structures, such as guidelines for the use of Gen AI in civic engagement that outline its risks and opportunities, to increase transparency around this new technology.
An important step towards creating responsible governance structures around the use of Gen AI in civic engagement should start with the immediate development of guidelines for how municipal employees should ethically and responsibly use Gen AI for civic engagement processes. While many cities already have guidelines for how municipal workers can apply Gen AI to aid in their day-to-day work, there are no specific guidelines for using it in civic engagement. Effective guidelines must prioritize transparency in the risks associated with these tools, ensuring both city officials and the public understand their capabilities and limitations. Our research shows that these guidelines must include the following critical components:
Transparency on potential harm: Civic engagement strategies employing Gen AI must clearly describe the potential risks associated with the underlying models. These risks include biases within the data, reliance on proprietary algorithms from the private sector, environmental impacts, and the potential for misinformation dissemination. There should be a focus on weighing the potential harms against the potential benefits.
Identification of use cases and associated biases: Guidelines should outline potential use cases of Gen AI in civic engagement, such as chatbots, visual interpretation, translation tools, and synthesis of meeting minutes and text, while also addressing the biases inherent in these applications. City officials should be encouraged to highlight inaccurate results and use those results as an opportunity to discuss the ways biases present themselves in public discourse. City officials can use these discussions to start a conversation about alternative strategies that focus on local community values that are not represented.
Encouragement of creative applications: City guidelines should encourage innovative approaches to leveraging Gen AI in civic engagement beyond conventional case studies outlined here. Creative solutions should seek to show how other forms of knowledge and community-centered local knowledge can be included in models while also protecting the community’s privacy.
Teaching the community to use Gen AI: It is clear that there is a digital divide among those who use Gen AI tools. Therefore, it is important to create a plan to educate the public on how best to ethically and responsibly use Gen AI to close that knowledge gap, including knowledge about search engineering and updating training data.
Formulating specialized guidelines for the ethical use of Gen AI in civic engagement is a pivotal step toward fostering transparency, accountability, and public trust. By addressing potential risks, promoting innovative applications, and leveraging existing frameworks, cities can navigate the complexities of AI integration while advancing inclusive and participatory governance.
Our work has identified many challenges and opportunities for using Gen AI in civic engagement. At the heart of our finding was the need to increase trust; therefore, we outline several areas of future research for Gen AI that might help us build that trust.
Locating biases in LLM training data and developing methods to address them, including methods for incorporating more diverse forms of knowledge and sociocultural perspectives.
Tracing how private sector companies use civic engagement data inputs (both model training data, such as a legal document, and user search data) and identifying the privacy and data security risks these tools present for city residents and employees. For example, could the contents of an uploaded legal document later surface in an answer from ChatGPT?
Investigating the feasibility (and potential benefits) of open-source generative AI models for municipal use. This would include analysis of the economic and technical feasibility of governments building such tools. A related question is whether there are better open-source Gen AI tools that governments could use; for example, the open-source model BLOOM is strong at multilingual tasks, and we should evaluate its potential biases and implementation requirements.
Identifying which approaches for participatory governance are best suited for promoting community involvement in the development, adoption, use, and regulation of Gen AI tools as well as broader values such as transparency and government accountability.
Evaluating the reliability and effectiveness of Gen AI-generated outputs for municipal services, including in areas such as translation and information dissemination. If we have a better understanding of how accurate the information is, we can better communicate the possibilities of Gen AI to build trust.
Developing an approach for assessing the environmental impact of Gen AI use by the city for civic engagement. This includes obtaining the electricity and water usage data from private companies to allow for a more accurate estimate of the true environmental cost. It is well documented that the data needed to make these environmental estimates are hard to acquire. Research might also attempt to make data proxies to help build better quantitative analysis around this issue. This research also could involve developing a plan for acquiring the needed data, such as creating policy strategies that might encourage sharing this data.
Discovering the ways generative AI use may change the essential dynamics of civic engagement and the places where human labor can help preserve the most important aspects of these exchanges.
Our research highlighted several factors contributing to the mistrust of Gen AI in civic engagement, such as inaccurate results and unknown training data. Further research should identify other factors contributing to the mistrust of Gen AI tools in the context of civic engagement. Perhaps more importantly, this research should be used to develop and test methods to increase transparency, accountability, and trust in how city staff and members of the public use these tools. For example, could the models themselves provide a level of understanding of their own accuracy?
Arnstein, Sherry R. “A Ladder Of Citizen Participation.” Journal of the American Institute of Planners 35, no. 4 (July 1969): 216–24. https://doi.org/10.1080/01944366908977225.
Asthana, Sumit, Sagih Hilleli, Pengcheng He, and Aaron Halfaker. “Summaries, Highlights, and Action Items: Design, Implementation and Evaluation of an LLM-Powered Meeting Recap System.” arXiv, July 28, 2023. https://doi.org/10.48550/arXiv.2307.15793.
Atreja, Shubham, Pooja Aggarwal, Prateeti Mohapatra, Amol Dumrewal, Anwesh Basu, and Gargi B. Dasgupta. “Citicafe: An Interactive Interface for Citizen Engagement.” In Proceedings of the 23rd International Conference on Intelligent User Interfaces, 617–28. IUI ’18. New York, NY, USA: Association for Computing Machinery, 2018. https://doi.org/10.1145/3172944.3172955.
Barocas, Solon, and Andrew D. Selbst. “Big Data’s Disparate Impact.” California Law Review 104 (2016): 671.
Bashir, Noman, Priya Donti, James Cuff, Sydney Sroka, Marija Ilic, Vivienne Sze, Christina Delimitrou, and Elsa Olivetti. “The Climate and Sustainability Implications of Generative AI,” 2024. https://mit-genai.pubpub.org/pub/8ulgrckc.
von Brackel-Schmidt, Constantin, Emir Kučević, Stephan Leible, Dejan Simic, Gian-Luca Gücük, and Felix N. Schmidt. “Equipping Participation Formats with Generative AI: A Case Study Predicting the Future of a Metropolitan City in the Year 2040.” In HCI in Business, Government and Organizations, edited by Fiona Fui-Hoon Nah and Keng Leng Siau, 270–85. Cham: Springer Nature Switzerland, 2024. https://doi.org/10.1007/978-3-031-61315-9_19.
Booz Allen Hamilton. “The Case for Open-Source Generative AI in Government.” Accessed June 2, 2024. https://www.boozallen.com/insights/ai/the-case-for-open-source-generative-ai-in-government.html.
Broomfield, Heather, and Lisa Reutter. “In Search of the Citizen in the Datafication of Public Administration.” Big Data & Society 9, no. 1 (January 2022): 205395172210893.
Campbell, William Ross. “The Sources of Institutional Trust in East and West Germany: Civic Culture or Economic Performance?” German Politics 13, no. 3 (September 2004): 401–18. https://doi.org/10.1080/0964400042000287437.
Carpini, Michael X., Fay Lomax Cook, and Lawrence R. Jacobs. “Public Deliberation, Discursive Participation, and Citizen Engagement: A Review of the Empirical Literature.” Annual Review of Political Science 7, no. 1 (May 17, 2004): 315–44. https://doi.org/10.1146/annurev.polisci.7.121003.091630.
Chen, Jiuhai, and Jonas Mueller. “Quantifying Uncertainty in Answers from Any Language Model and Enhancing Their Trustworthiness,” 2023. https://doi.org/10.48550/ARXIV.2308.16175.
Corbett, Eric, and Christopher A. Le Dantec. “Exploring Trust in Digital Civics.” In Proceedings of the 2018 Designing Interactive Systems Conference, 9–20. Hong Kong China: ACM, 2018. https://doi.org/10.1145/3196709.3196715.
Corritore, Cynthia L., Susan Wiedenbeck, and Beverly Kracher. “The Elements of Online Trust.” In CHI ’01 Extended Abstracts on Human Factors in Computing Systems, 504–5. Seattle, Washington: ACM, 2001. https://doi.org/10.1145/634067.634355.
Crawford, Kate. “Generative AI’s environmental costs are soaring - and mostly secret.” Nature 626, no. 8000 (February 2024): 693. https://doi.org/10.1038/d41586-024-00478-x.
Daepp, Madeleine I. G., and Scott Counts. “The Emerging AI Divide in the United States.” arXiv, April 30, 2024. https://doi.org/10.48550/arXiv.2404.11988.
De Sousa, Weslei Gomes, Elis Regina Pereira De Melo, Paulo Henrique De Souza Bermejo, Rafael Araújo Sousa Farias, and Adalmir Oliveira Gomes. “How and Where Is Artificial Intelligence in the Public Sector Going? A Literature Review and Research Agenda.” Government Information Quarterly 36, no. 4 (October 2019): 101392. https://doi.org/10.1016/j.giq.2019.07.004.
Desouza, Kevin C., and Akshay Bhagwatwar. “Technology-Enabled Participatory Platforms for Civic Engagement: The Case of US Cities.” In Urban Informatics, 25–50. Routledge, 2017. https://www.taylorfrancis.com/chapters/edit/10.4324/9781315652283-3/technology-enabled-participatory-platforms-civic-engagement-case-cities-kevin-desouza-akshay-bhagwatwar.
D’Ignazio, Catherine, and Lauren F. Klein. Data Feminism. MIT Press, 2023.
DiMaggio, Paul, Eszter Hargittai, Coral Celeste, and Steven Shafer. “Digital Inequality: From Unequal Access to Differentiated Use.” In Social Inequality, edited by Kathryn Neckerman, 355–400. New York: Russell Sage Foundation, 2004.
Edinger, Julia. “Colorado Springs Chatbot Is Powered by AI, City Data.” Government Technology, May 24, 2024. https://www.govtech.com/artificial-intelligence/colorado-springs-chatbot-is-powered-by-ai-city-data.
Egger, F., M. Helander, H. Khalid, and N. Tham. “Affective Design of E-Commerce User Interfaces: How to Maximise Perceived Trustworthiness,” 2001. https://www.semanticscholar.org/paper/Affective-design-of-E-commerce-user-interfaces%3A-how-Egger-Helander/aa320c098c0557986250a903624b0980dbfcc40a.
Epstein, Dave, Taesung Park, Richard Zhang, Eli Shechtman, and Alexei A. Efros. “BlobGAN: Spatially Disentangled Scene Representations.” In Computer Vision – ECCV 2022, edited by Shai Avidan, Gabriel Brostow, Moustapha Cissé, Giovanni Maria Farinella, and Tal Hassner, 616–35. Cham: Springer Nature Switzerland, 2022. https://doi.org/10.1007/978-3-031-19784-0_36.
Epstein, Ziv, Hope Schroeder, and Dava Newman. “When Happy Accidents Spark Creativity: Bringing Collaborative Speculation to Life with Generative AI,” arXiv, June 1, 2022. https://doi.org/10.48550/arXiv.2206.00533.
Esposito, Mark, and Terence Tse. “Mitigating the Risks of Generative AI in Government through Algorithmic Governance.” SSRN Scholarly Paper. Rochester, NY, January 14, 2024. https://doi.org/10.2139/ssrn.4811924.
ESRI. “ArcGIS: Product & Technology Vision.” Esri Videos: GIS, Events, ArcGIS Products & Industries, March 12, 2024. https://mediaspace.esri.com/media/t/1_c1ftfyju.
Eubanks, Virginia. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press, 2018.
Fathullah, Yassir, Chunyang Wu, Egor Lakomkin, Junteng Jia, Yuan Shangguan, Ke Li, Jinxi Guo, et al. “Prompting Large Language Models with Speech Recognition Abilities.” In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 13351–55, 2024. https://doi.org/10.1109/ICASSP48485.2024.10447605.
Fatima, Samar, Kevin C. Desouza, and Gregory S. Dawson. “National Strategic Artificial Intelligence Plans: A Multi-Dimensional Analysis.” Economic Analysis and Policy 67 (September 2020): 178–94. https://doi.org/10.1016/j.eap.2020.07.008.
Feng, Yu, Linfang Ding, and Guohui Xiao. “GeoQAMap – Geographic Question Answering with Maps Leveraging LLM and Open Knowledge Base (Short Paper).” In 12th International Conference on Geographic Information Science (GIScience 2023). Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2023. https://doi.org/10.4230/LIPIcs.GIScience.2023.28.
Fitzpatrick, Alex. “Meet the Artist Using AI to Imagine Better Cities.” Axios, August 3, 2022. https://www.axios.com/2022/08/03/artist-ai-dall-e-urban-design.
Future of Life Institute. “Pause Giant AI Experiments: An Open Letter,” March 22, 2023. https://futureoflife.org/open-letter/pause-giant-ai-experiments/.
George, A. Shaji, A. S. Hovan George, and A. S. Gabrio Martin. “The Environmental Impact of AI: A Case Study of Water Consumption by Chat GPT.” April 20, 2023. https://doi.org/10.5281/ZENODO.7855594.
Giri, Deepak, and Erin Brady. “A Democratic Platform for Engaging with Disabled Community in Generative AI Development.” arXiv, September 26, 2023. https://doi.org/10.48550/arXiv.2309.14921.
Gordon, Eric. “With Trust in Government Waning, Can New Technologies Make It Easier to Govern?” Knight Foundation, December 5, 2022. https://knightfoundation.org/articles/with-trust-in-government-waning-can-new-technologies-make-it-easier-to-govern/.
Gudiño-Rosero, Jairo, Umberto Grandi, and César A. Hidalgo. “Large Language Models (LLMs) as Agents for Augmented Democracy.” arXiv, May 7, 2024. https://doi.org/10.48550/arXiv.2405.03452.
Guridi, Jose A., Cristobal Cheyre, Maria Goula, Duarte Santo, Lee Humphreys, Aishwarya Shankar, and Achilleas Souras. “Image Generative AI to Design Public Spaces: A Reflection of How AI Could Improve Co-Design of Public Parks.” Digital Government: Research and Practice, April 10, 2024. https://doi.org/10.1145/3656588.
Hakhverdian, Armen, and Quinton Mayne. “Institutional Trust, Education, and Corruption: A Micro-Macro Interactive Approach.” Journal of Politics 74, no. 3 (July 2012): 739–50. https://doi.org/10.1017/S0022381612000412.
Hargittai, Eszter, and Amanda Hinnant. “Digital Inequality: Differences in Young Adults’ Use of the Internet.” Communication Research 35, no. 5 (October 2008): 602–21. https://doi.org/10.1177/0093650208321782.
Henman, Paul. “Improving Public Services Using Artificial Intelligence: Possibilities, Pitfalls, Governance.” Asia Pacific Journal of Public Administration 42, no. 4 (October 1, 2020): 209–21. https://doi.org/10.1080/23276665.2020.1816188.
Henneborn, Laurie. “Designing Generative AI to Work for People with Disabilities.” Harvard Business Review, August 18, 2023. https://hbr.org/2023/08/designing-generative-ai-to-work-for-people-with-disabilities.
Hilger, Peter. “Civic Engagement of Marginalised Groups: Educational Aspects and Public Representation.” Citeseer, 2004. https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=f57c377d2852f283f1ef3b6a14f881fb9888a137.
Hsu, Yen-Chia, Ting-Hao ‘Kenneth’ Huang, Himanshu Verma, Andrea Mauri, Illah Nourbakhsh, and Alessandro Bozzon. “Empowering Local Communities Using Artificial Intelligence.” Patterns 3, no. 3 (March 11, 2022): 100449. https://doi.org/10.1016/j.patter.2022.100449.
Innes, Judith E., and David E. Booher. “Reframing Public Participation: Strategies for the 21st Century.” Planning Theory & Practice 5, no. 4 (December 2004): 419–36. https://doi.org/10.1080/1464935042000293170.
Janssen, Marijn, Paul Brous, Elsa Estevez, Luis S. Barbosa, and Tomasz Janowski. “Data Governance: Organizing Data for Trustworthy Artificial Intelligence.” Government Information Quarterly 37, no. 3 (July 2020): 101493. https://doi.org/10.1016/j.giq.2020.101493.
Kasneci, Enkelejda, Kathrin Sessler, Stefan Küchemann, Maria Bannert, Daryna Dementieva, Frank Fischer, Urs Gasser, et al. “ChatGPT for Good? On Opportunities and Challenges of Large Language Models for Education.” Learning and Individual Differences 103 (April 1, 2023): 102274. https://doi.org/10.1016/j.lindif.2023.102274.
Kuziemski, Maciej, and Gianluca Misuraca. “AI Governance in the Public Sector: Three Tales from the Frontiers of Automated Decision-Making in Democratic Settings.” Telecommunications Policy 44, no. 6 (July 2020): 101976. https://doi.org/10.1016/j.telpol.2020.101976.
Laskar, Md Tahmid Rahman, Xue-Yong Fu, Cheng Chen, and Shashi Bhushan TN. “Building Real-World Meeting Summarization Systems Using Large Language Models: A Practical Perspective.” arXiv, 2023. https://doi.org/10.48550/arXiv.2310.19233.
Lewis, Patrick, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks.” Advances in Neural Information Processing Systems 33 (2020): 9459–74.
Li, Zhenlong, and Huan Ning. “Autonomous GIS: The next-Generation AI-Powered GIS.” International Journal of Digital Earth 16, no. 2 (December 8, 2023): 4668–86. https://doi.org/10.1080/17538947.2023.2278895.
Littlefield, Jamie. “Re-Imagining Urban Spaces with Dall-E AI.” August 2022. https://www.wordsbuildcities.com/projects/dalle.
Louie, Ashley. “BetaNYC Presents FloodGen, an Advocacy Tool That Uses Generative AI Imagery.” BetaNYC, May 3, 2024. https://beta.nyc/2024/05/03/betanyc-presents-floodgen-a-advocacy-tool-that-uses-generative-ai-imagery/.
Luccioni, Alexandra Sasha, Yacine Jernite, and Emma Strubell. “Power Hungry Processing: Watts Driving the Cost of AI Deployment?” arXiv, May 23, 2024. https://doi.org/10.48550/arXiv.2311.16863.
Marusich, Laura R., Jonathan Z. Bakdash, Yan Zhou, and Murat Kantarcioglu. “Using AI Uncertainty Quantification to Improve Human Decision-Making.” arXiv, February 6, 2024. https://doi.org/10.48550/arXiv.2309.10852.
Merilehto, Juhani. “On Generative Artificial Intelligence: Open-Source Is the Way.” SocArXiv, March 13, 2024. https://doi.org/10.31235/osf.io/jnmzg.
Quan, Steven Jige. “Urban-GAN: An Artificial Intelligence-Aided Computation System for Plural Urban Design.” Environment and Planning B: Urban Analytics and City Science 49, no. 9 (November 2022): 2500–15. https://doi.org/10.1177/23998083221100550.
Reynante, Brandon, Steven P. Dow, and Narges Mahyar. “A Framework for Open Civic Design: Integrating Public Participation, Crowdsourcing, and Design Thinking.” Digital Government: Research and Practice 2, no. 4 (October 31, 2021): 1–22. https://doi.org/10.1145/3487607.
Riegelsberger, Jens, M. Angela Sasse, and John D. McCarthy. “Trust in Mediated Interactions.” In The Oxford Handbook of Internet Psychology, edited by Adam N. Joinson, Katelyn Y. A. McKenna, Tom Postmes, and Ulf-Dietrich Reips. Oxford University Press, 2012. https://doi.org/10.1093/oxfordhb/9780199561803.013.0005.
Romberg, Julia, and Tobias Escher. “Making Sense of Citizens’ Input through Artificial Intelligence: A Review of Methods for Computational Text Analysis to Support the Evaluation of Contributions in Public Participation.” Digital Government: Research and Practice 5, no. 1 (March 31, 2024): 1–30. https://doi.org/10.1145/3603254.
Stelzle, Benjamin, Anja Jannack, and Jörg Rainer Noennig. “Co-Design and Co-Decision: Decision Making on Collaborative Design Platforms.” Procedia Computer Science 112 (2017): 2435–44. https://doi.org/10.1016/j.procs.2017.08.095.
Stephanidis, Constantine, Margherita Antona, Stavroula Ntoa, and Gavriel Salvendy, eds. HCI International 2023 Posters: 25th International Conference on Human-Computer Interaction, HCII 2023, Copenhagen, Denmark, July 23–28, 2023, Proceedings, Part V. Vol. 1836. Communications in Computer and Information Science. Cham: Springer Nature Switzerland, 2023. https://doi.org/10.1007/978-3-031-36004-6.
Stratton, Caroline. “Planning to Maintain the Status Quo? A Comparative Study of Digital Equity Plans of Four Large US Cities.” Journal of Community Informatics 17 (December 1, 2021): 46–71. https://doi.org/10.15353/joci.v17i.3576.
Tack, Achim. “Adobe Photoshop’s Generative Fill Feature for Innovative Urban Design and Public Engagement.” May 29, 2023. https://www.achim-tack.org/blog/2023/5/29/xu022x85j8asdx50ikomi5yo416mbe.
Torggler, Michael. “The Functionality and Usage of CRM Systems.” Zenodo, May 2008. https://doi.org/10.5281/ZENODO.1056320.
UrbanistAI. “UrbanistAI in Action: Participatory Workshops to Design Helsinki’s Summer Streets.” Medium, June 20, 2023. https://medium.com/urbanistai/urbanistai-in-action-participatory-workshops-to-design-helsinkis-summer-streets-b14b733c54c8.
Veale, Michael, and Irina Brass. “Administration by Algorithm? Public Management Meets Public Sector Machine Learning.” SSRN Scholarly Paper. Rochester, NY, 2019. https://doi.org/10.31235/osf.io/mwhnb.
Williams, Sarah. Data Action: Using Data for Public Good. Cambridge, Massachusetts: MIT Press, 2022.
Yan, Eugene. “A List of Open LLMs Available for Commercial Use.” GitHub, June 5, 2024. https://github.com/eugeneyan/open-llms.
Yang, Joshua C., Damian Dailisan, Marcin Korecki, Carina I. Hausladen, and Dirk Helbing. “LLM Voting: Human Choices and AI Collective Decision Making.” arXiv, May 15, 2024. https://doi.org/10.48550/arXiv.2402.01766.
Yang, Qing, and Wenfang Tang. “Exploring the Sources of Institutional Trust in China: Culture, Mobilization, or Performance?” Asian Politics & Policy 2, no. 3 (July 2010): 415–36. https://doi.org/10.1111/j.1943-0787.2010.01201.x.
Ye, Fanghua, Mingming Yang, Jianhui Pang, Longyue Wang, Derek F. Wong, Emine Yilmaz, Shuming Shi, and Zhaopeng Tu. “Benchmarking LLMs via Uncertainty Quantification.” arXiv, April 25, 2024. https://doi.org/10.48550/arXiv.2401.12794.
Zarifhonarvar, Ali. “Economics of ChatGPT: A Labor Market View on the Occupational Impact of Artificial Intelligence.” SSRN Scholarly Paper. Rochester, NY, February 7, 2023. https://doi.org/10.2139/ssrn.4350925.
Zhang, Yifan, Cheng Wei, Shangyou Wu, Zhengting He, and Wenhao Yu. “GeoGPT: Understanding and Processing Geospatial Tasks through An Autonomous GPT.” arXiv, July 15, 2023. https://doi.org/10.48550/arXiv.2307.07930.
Zuiderwijk, Anneke, Yu-Che Chen, and Fadi Salem. “Implications of the Use of Artificial Intelligence in Public Governance: A Systematic Literature Review and a Research Agenda.” Government Information Quarterly 38, no. 3 (July 1, 2021): 101577. https://doi.org/10.1016/j.giq.2021.101577.