
AI for Research

Ethical Considerations

Consider some of the following ethical considerations before using AI in your research:

Training

Generative AI is trained on enormous amounts of data; many commonly used LLMs are trained on much of the internet. Because this training is largely unsupervised, we don't know what connections a model is making or how it is learning. Much of the internet contains incorrect, non-factual, and biased information, and LLMs learn from that material and reproduce it in their outputs. The information that LLMs are trained on is not neutral, and neither are LLMs.

Hallucinations

AI hallucinations are outputs that contain misleading or incorrect information. This information is "fluent but not factual," so it can be difficult to spot (Stokel-Walker et al., 2023). When LLMs lack data on a specific subject, they often invent it. LLMs have produced many examples of completely fabricated information, including non-existent academic articles in bibliographies.

Intellectual Property

As previously mentioned, LLMs are essentially trained on the entire internet. However, much of the internet is copyrighted, and LLMs use and remix that copyrighted material without compensating or crediting the owners. This also means that LLMs can plagiarize if their output is too closely related to the material they were trained on.

Legally, it is still unclear who owns the outputs of generative AI. The work you write as a student at Adler University belongs to you; however, the same cannot be said for the output of prompts you've generated.