Some articles that caught our attention and that we think are worth sharing.
AI will soon be able to cover public meetings. But should it?
AI may provide a solution to cover meetings that resource-starved news and civic organizations might miss. And it’s important that we develop guidelines with a strong ethical foundation.
Government of Canada Privacy Guidance Checklist
A practical checklist from the Government of Canada of privacy measures to observe in planning, designing, reviewing, launching, and updating a site.
Where do we go from X?
Neville Hobson looks at the implications of more fragmented social networks in the wake of Elon Musk’s dismantling of Twitter.
Disney, The New York Times and CNN are among a dozen major media companies blocking access to ChatGPT as they wage a cold war on A.I.
Major news outlets are taking steps to prevent AI applications from scraping their sites to train the AIs’ large language models (LLMs).
Since the release of ChatGPT in late 2022, Artificial Intelligence (AI) has exploded in public awareness and everyday use. AI has been introduced in common office productivity apps, email, image generation, search, and online commerce.
But as quickly as the technology is advancing, many voices of caution are calling for norms and guidelines to govern its use. Technology companies, governments, civil society, and users of the technology all have a role in defining the benefits we want to receive and how best to contain the risks.
Public Participation practitioners have begun to experiment with AI tools. And as we use AI, we can see both the promise it holds to enhance meaningful public involvement as well as the risks that it may pose.
76engage is actively exploring the application of artificial intelligence to online public engagement. But as we do this, we are prioritizing the safety of participants and the security of the data you collect, especially participants’ personally identifiable information (PII). We see great potential for AI. But we also understand the risks of doing it wrong. Risks to privacy. Risks to security. And risks that we may not yet understand.
So, as we start on our AI journey, we have set a basic policy: AI will be opt-in. Unless you explicitly request that we turn on AI features, they will be turned off. This gives you the control to determine when AI can benefit you and whether it fits your corporate policies.
We are also applying measures to restrict AI from scraping your data from 76engage. OpenAI recently announced GPTBot, the web crawler it uses to gather training data for ChatGPT. At the same time, it published a code snippet, which we have applied to robots.txt to tell GPTBot not to scrape content from a 76engage site. We will look for other opportunities to control access by AI large language models to your content.
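For readers who want to apply the same measure to their own sites, the directive OpenAI published for blocking GPTBot is a standard robots.txt rule (shown here as a general illustration, not the exact contents of a 76engage robots.txt file):

```
# Tell OpenAI's GPTBot crawler not to index any page on this site
User-agent: GPTBot
Disallow: /
```

Placing this in the robots.txt file at the root of a site asks GPTBot to skip the entire site; narrower paths can be listed instead of `/` to block only specific sections.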
As we said, we are experimenting and exploring the potential of AI applied to online engagement. We have installed some experimental modules and are developing features that use AI to improve the participant experience as well as the reporting and analysis of participants’ input.
However, to guard your safety and security, we are doing this exclusively on our Lean 76engage site. That way, the AI features operate only on content we create for ourselves, knowing that it is experimental content and could be shared with the AI models.
We’re very much looking forward to developing these new features. But when we offer them, we want you to be assured that they are both safe and secure for you to use.
So that is our approach: rapid innovation, with the safety of your participants and the security of your data as a central goal.