User:Phlsph7/SourceVerificationAIAssistant

From Wikipedia, the free encyclopedia
User script: Source Verification AI Assistant
Description: Script to help editors verify whether a reliable source supports a claim.
Author(s): Phlsph7
First released: July 25, 2023
Updated: July 25, 2023
Browsers: all modern browsers
Skins: all skins
Source: User:Phlsph7/SourceVerificationAIAssistant.js

Source Verification AI Assistant is a user script to help editors verify whether a reliable source supports a claim. It attempts to find sentences in the source that support the claim and quotes them. The underlying AI technology is still in an early stage and often gives inaccurate responses. For this reason, it is the responsibility of the editor to ensure that the quoted sentences are found in the reliable source and that they support the claim. Editors should not blindly rely on the responses.

To display the responses directly, an API key from OpenAI is required. Without an API key, the script can generate a prompt, which can then be used on other websites, like chat.openai.com or bard.google.com.

The script can be used on articles, drafts, and in the user space. It also works when previewing changes.

Installation

After installation, Source Verification AI Assistant can be accessed via the toolbox on the right side.

To install this script, go to your common.js and add the following line:

importScript('User:Phlsph7/SourceVerificationAIAssistant.js'); // Backlink: [[User:Phlsph7/SourceVerificationAIAssistant.js]]

After the script is installed, it can be opened by going to the toolbox on the right and clicking on the link "Source Verification AI Assistant".
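
The details are in the script's source code, but a toolbox entry like this is typically registered with MediaWiki's mw.util.addPortletLink function. The following is only a minimal sketch under that assumption; the function openAssistantDialog is a hypothetical placeholder, not the script's actual code.

// Minimal sketch of how a script typically adds a toolbox link (not the script's actual code).
mw.loader.using('mediawiki.util').then(function () {
    var link = mw.util.addPortletLink(
        'p-tb',                              // the toolbox portlet
        '#',                                 // no navigation; the click is handled below
        'Source Verification AI Assistant'   // label shown in the toolbox
    );
    link.addEventListener('click', function (event) {
        event.preventDefault();
        openAssistantDialog();               // hypothetical placeholder for opening the script's dialog
    });
});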

Once you have an OpenAI account, click on "Create new secret key" to generate your personal API key.
To get responses directly in the script, use the button "Set API key" at the bottom right.

The script can be used without an OpenAI key by copying the prompt and pasting it into the external website of an AI model, like chat.openai.com or bard.google.com. But to get responses directly from the script, an API key from OpenAI is required. To obtain an API key, first create an OpenAI account.

  1. Go to https://platform.openai.com/signup and follow the process. It is free but a phone number is required for verification.
  2. Once you have an account, you can generate an API key. To do so, go to https://platform.openai.com/account/api-keys and click on "Create new secret key".
  3. The API key starts with "sk-....". Copy it to use it for the script.
  4. Start the script. At the bottom right, click the button "Set API key" and enter your API key. You can also use this button to change or remove your API key.

Please make sure that you do not share your API key with anyone. The script saves the API key locally on your device. It does not share it with Wikipedia or anyone else. It will only use it for your queries to the AI model.
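
For illustration only, a query of this kind is typically sent to OpenAI's chat completions endpoint with the API key in an Authorization header. The sketch below shows the general pattern; the storage key name, the model, and the function names are assumptions, not the script's actual code.

// Minimal sketch, not the script's actual code: the key is kept in local browser storage
// and only sent to OpenAI together with the query.
localStorage.setItem('svaaApiKey', 'sk-...');          // hypothetical storage key name

async function askModel(prompt) {
    const apiKey = localStorage.getItem('svaaApiKey');
    const response = await fetch('https://api.openai.com/v1/chat/completions', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'Authorization': 'Bearer ' + apiKey        // sent to OpenAI only, not to Wikipedia
        },
        body: JSON.stringify({
            model: 'gpt-3.5-turbo',                    // assumed model; the script may use another one
            messages: [{ role: 'user', content: prompt }]
        })
    });
    const data = await response.json();
    return data.choices[0].message.content;            // the model's answer text
}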

After creating a new OpenAI account, users can use their API key for free for the first 3 months. The free account is limited in how many queries can be made per minute and in total. The script displays an error message if these limits are temporarily reached or if the 3 months have passed. Afterward, monthly payments are required to use the API key. The cost then ranges from a fraction of a cent per query up to about 2 cents for queries with very long reliable sources. More details on free and paid accounts can be found at https://openai.com/pricing.

If you run into problems or have suggestions on how to improve the script, please discuss them at User_talk:Phlsph7/SourceVerificationAIAssistant.

Purpose and usage


The purpose of this script is to help editors verify whether a claim in a Wikipedia article is supported by a reliable source. It looks for sentences in the reliable source that support the claim and quotes them. Please be very careful when interpreting its results since it may miss supporting sentences or quote sentences that do not actually support the claim.

To use the script, select a sentence in a Wikipedia article that you want to verify.[a] Next, go to the toolbox and click on the link "Source Verification AI Assistant".[b] Paste the text of the reliable source into the field "Reliable source". It can come from a website or a PDF file and can also be entered manually.

  • If you have set an API key, you can use the button "Suggest supporting sentences". The response will be displayed in the field "Suggestions".
  • If you do not have an API key, you can click the button "Copy prompt" to copy the prompt to the clipboard. Next, you can go to an external website of an AI model, like chat.openai.com or bard.google.com. Paste the prompt into the message field and click on send.[c]
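
The exact wording of the prompt is defined by the script and is not reproduced here. The sketch below only illustrates the general shape of a prompt built from the claim and the source text; the wording is hypothetical.

// Hypothetical sketch of how a prompt could be assembled; the script's actual wording may differ.
function buildPrompt(claim, sourceText) {
    return 'Quote the sentences from the following source that support the claim. ' +
        'If no sentences support it, say so.\n\n' +
        'Claim: ' + claim + '\n\n' +
        'Source: ' + sourceText;
}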

The script does not work if the text in the fields "Claim" or "Reliable source" is too long. You can automatically reduce the length of the reliable source text by clicking the button "Remove excessive source text" to remove all the text that comes after the length limit. Be careful since this may remove the passage that would support the claim. If you want to use the prompt on an external website, please keep in mind that they usually have a stricter limit on the length of the prompt and you may have to reduce the text of the reliable source further.
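
Presumably the button simply cuts the source text off at a fixed length limit. The sketch below illustrates such a truncation; the limit of 12,000 characters is an assumed placeholder, not the script's actual value.

// Minimal sketch, assuming a fixed character limit (the value is a placeholder,
// not the script's actual limit): everything after the limit is discarded.
var MAX_SOURCE_LENGTH = 12000;

function removeExcessiveSourceText(sourceText) {
    if (sourceText.length <= MAX_SOURCE_LENGTH) {
        return sourceText;
    }
    return sourceText.slice(0, MAX_SOURCE_LENGTH);     // may cut off the supporting passage
}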

To get the best results, make the claim as short and straightforward as possible. For example, if it consists of a very long sentence with many different assertions, you may try to reformulate it into several short sentences that each make only one assertion. You can then use the script several times: once for each short sentence. The script does not know the context of the claim. If the claim contains words like "she" or "at that time", it is unlikely to produce good results. So if "she" refers to Iris Murdoch, use that name instead of the pronoun.

The script works better for short reliable sources. If you think that a specific section of the reliable source supports the claim then you can try using only the text of that section instead of the whole text. It works best for English sources but can also be used for foreign language sources.

The script is optimized for use with an OpenAI API key. The results its prompts produce on external websites may vary considerably and depend on the AI model being used.

Limitations and dangers


The technology of the underlying AI model is very new and has many serious limitations. It was not designed to verify claims based on reliable sources and often produces false results. For example, it may state that the source does not support the claim even though it clearly does. And if it quotes supportive sentences, they may not fully support the claim and sometimes provide no support at all. For these reasons, reviewers should never blindly rely on the script and should always check its results themselves. So if the script did not find any sentence, this only means that the reviewer has to read through the source themselves. If it quotes sentences, the reviewer has to ensure that (1) the sentences are found in the reliable source[d] and (2) they support the claim.

Another limitation is that the script is not aware of images or diagrams and may misinterpret text in tables. The script also does not check whether the text entered in the field "Reliable source" is actually from a reliable source.

More information on the limitations of AI models can be found at WP:LLM. This script should never be used to semi-automatically make changes to articles. A thorough and detailed human evaluation is always required.

Examples


From: Quine–Putnam indispensability argument

(Please double-check the following information yourself and do not blindly trust it)
The sentences in the reliable source that support the claim are:

"From the rather remarkable but seemingly uncontroversial fact that mathematics is indispensable to science, some philosophers have drawn serious metaphysical conclusions."

"According to this line of argument, reference to (or quantification over) mathematical entities such as sets, numbers, functions and such is indispensable to our best scientific theories, and so we ought to be committed to the existence of these mathematical entities."

"In general, an indispensability argument is an argument that purports to establish the truth of some claim based on the indispensability of the claim in question for certain purposes (to be specified by the particular argument)."

"For example, if explanation is specified as the purpose, then we have an explanatory indispensability argument."

From: Quine–Putnam indispensability argument

  • Claim: For the logical positivists, all justified beliefs were reducible to sense data, including our knowledge of ordinary objects such as trees.
  • Source: https://iep.utm.edu/indimath/#SH2a
  • Suggestions:
(Please double-check the following information yourself and do not blindly trust it)
The sentences in the reliable source that support the claim are:

- "Positivism requires that any justified belief be constructed out of, and be reducible to, claims about observable phenomena."
- "We know about ordinary objects like trees because we have sense experience, or sense data, of trees directly."
- "For the positivists, any scientific claim must be reducible to sense data."
- "By rejecting positivism’s requirement for reductions of scientific claims to sense data, Quine allows for beliefs in mathematical objects despite their abstractness."

From: Intrapersonal communication#Types

(Please double-check the following information yourself and do not blindly trust it)
The sentences in the reliable source that support the claim are:

- "Levels of intrapersonal communication range along a continuum according to the extent messages are stored in the environment around the self communication system."
- "Such activities as 'thinking,' 'meditating,' and 'reflecting,' which may require no environmental storage outside the life space of the communicator, are on one end of this continuum and activities such as 'talking aloud to oneself' and 'writing oneself a note,' which require considerably more environmental storage, are on the other end of this continuum."
- "Intrapersonal communication might appear to a casual observer to be merely a truncated version of interpersonal communication, but a closer observation would reveal this is not the case."
- "Though indisputable similarities exist, there are certain characteristics of the intrapersonal network which differentiate it from other kinds of communication."
- "The ideas reflected in this section are those of Ruesch and Bateson [20]."

From: Elisabeth Anderson Sierra

(Please double-check the following information yourself and do not blindly trust it)
The sentences in the reliable source that support the claim are:

"The Aloha, Oregon, USA, native set the record for the largest donation of breastmilk by an individual by donating 1,599.68 litres (56,301.20 UK fl oz) to a milk bank between 20 February 2015 and 20 June 2018."

"This only accounts for milk that I donated to a milk bank between the years of 2015 and 2018," said Elisabeth.

"Over the past nine years, Elisabeth - mom to two daughters and a son - has donated to local families and recipients worldwide and estimates the total amount of breastmilk donated to be over 350,000 ounces."

From: Template:Did_you_know_nominations/$456,000_Squid_Game_In_Real_Life!

(Please double-check the following information yourself and do not blindly trust it)
The reliable source supports the claim that MrBeast's Squid Game recreation was described as "perverse" and a misunderstanding of the original. The following sentences from the reliable source provide support:

- "As a piece of media, it’s perverse. This doesn’t just badly misunderstand the anti-capitalist message of Squid Game, it’s a literal recreation of the villain's ultimate desire to watch desperate people compete for money purely for his amusement."
- "More than just bizarre, Mr. Beast’s Squid Game highlights a fundamental problem of YouTube."
- "Despite being original content from a popular content creator, it’s nothing more than a sad retread of someone else’s work."

From: Template:Did you know nominations/Saints Peter and Paul Seminary

(Please double-check the following information yourself and do not blindly trust it)
The sentences in the reliable source that support the claim are:

1. "The school near Newark is the only Roman Catholic high school seminary in Ohio and has been in Licking County since 1956."
2. "Since 1972, it has been the last high school operated in the United States by the Pontifical Institute for Foreign Missions, Mazur said."
3. "Finding priests willing to live at the seminary with the teen-agers and help monitor their activities has been difficult, Mazur said."
4. "The school could have tried to add more lay teachers to compensate for the priest shortage but that would have defeated its mission, said Ted Tomko, who has taught science and math since 1983."

See also


Notes

  a. This step is optional. You can also use the script without selecting a sentence first.
  b. You can have the toolbox floating on the side by clicking on its button "move to sidebar". This way, you do not have to scroll all the way up after selecting the sentence.
  c. It is suggested that you start a new conversation for each query to ensure that the AI model does not quote sentences from earlier queries.
  d. The script sometimes introduces slight changes to the quoted sentences, like changing the quotation mark style. This can make it more difficult to find the quoted sentences in the source text.