Research Engine

Arian

 Currently co-chaired by Microsoft and Telefonica, the Council is committed to strengthening technical capacities in ethics and AI, designing and implementing the Ethical Impact Assessment tool mandated by the Recommendation on the Ethics of AI, and contributing to the development of intelligent regional regulations. Through these efforts, it strives to create a competitive environment that benefits all stakeholders and promotes the responsible and ethical use of AI.

 If you choose to use generative AI tools for course assignments, academic work, or other forms of published writing, you should give special attention to how you acknowledge and cite the output of those tools in your work. You should always check with your instructor before using AI for coursework.

 As with all things related to AI, the norms and conventions for citing AI-generated content are likely to evolve over the next few years. For now, some of the major style guides have released preliminary guidelines. Individual publishers may have their own guidance on citing AI-generated content.

 Do cite or acknowledge the outputs of generative AI tools when you use them in your work. This includes direct quotations and paraphrasing, as well as using the tool for tasks like editing, translating, idea generation, and data processing.

 Be flexible in your approach to citing AI-generated content, because emerging guidelines will always lag behind the current state of technology, and the way that technology is applied. If you are unsure of how to cite something, include a note in your text that describes how you used a certain tool.

 When in doubt, remember that we cite sources for two primary purposes: first, to give credit to the author or creator; and second, to help others locate the sources you used in your research. Use these two concepts to help make decisions about using and citing AI-generated content.

 When you cite AI-generated content using APA style, you should treat that content as the output of an algorithm, with the author of the content being the company or organization that created the model. For example, when citing ChatGPT, the author would be OpenAI, the company that created ChatGPT.
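As a sketch of what this looks like in practice, an APA-style reference entry and in-text citation for ChatGPT might take the following shape (the version date and URL here are illustrative, not prescribed):

```text
Reference list:
OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat

In-text citation:
(OpenAI, 2023)
```

Note that the organization that built the model appears in the author position, and the model itself is described in brackets, consistent with APA's treatment of software.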

 When referencing shorter passages of text, you can include that text directly in your paper. You might also include an appendix or link to an online supplement that includes the full text of long responses from a generative AI tool.

 Chicago style requires that you cite AI-generated content in your work by including either a note or a parenthetical citation, but advises you not to include that source in your bibliography or reference list. The reason given for this is that, because you cannot provide a link to the conversation or session with the AI tool, you should treat that content as you would a phone call or private conversation. However, AI tools are starting to introduce functionality that allows a user to generate a shareable link to a chat conversation, so this guidance from the Chicago Manual of Style may change.

 The MLA views AI-generated content as a source with no author, so you'll use the title of the source in your in-text citations, and in your reference list. The title you choose should be a brief description of the AI-generated content, such as an abbreviated version of the prompt you used.
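To illustrate, an MLA-style works-cited entry built from a prompt might look something like the following (the prompt, version, and date are illustrative assumptions, not a fixed template):

```text
Works cited:
"Describe the symbolism of the green light in The Great Gatsby" prompt. ChatGPT, 13 Feb. version, OpenAI, 16 Feb. 2023, chat.openai.com/chat.

In-text citation:
("Describe the symbolism")
```

The quoted, abbreviated prompt stands in for the title of an unauthored source, which is what then appears in the in-text citation.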

 Authors who use AI tools in the writing of a manuscript, production of images or graphical elements of the paper, or in the collection and analysis of data, must be transparent in disclosing in the Materials and Methods (or similar section) of the paper how the AI tool was used and which tool was used. Authors are fully responsible for the content of their manuscript, even those parts produced by an AI tool, and are thus liable for any breach of publication ethics.

 The use of AI in the publication process is intended to increase the speed of decision making during the review process and reduce the burden on editors, reviewers, and authors. The adoption of AI raises key ethical issues around accountability, responsibility, and transparency.

 Generative artificial intelligence (AI) tools are evolving incredibly quickly, and they are having a significant impact on education and research. This guide provides information about using generative AI in ethical, creative, and evaluative ways. It focuses on five key areas:

 This guide is licensed under CC BY-NC-SA 4.0, with the exception of the CLEAR Framework, which was used with permission of Leo S. Lo, and part of the "Evaluating AI Content" page, which was adapted with permission of the University of British Columbia Library.

 Territorial Acknowledgement The University of Alberta, its buildings, labs and research stations are primarily located on the territory of the Néhiyaw (Cree), Niitsitapi (Blackfoot), Métis, Nakoda (Stoney), Dene, Haudenosaunee (Iroquois) and Anishinaabe (Ojibway/Saulteaux), lands that are now known as part of Treaties 6, 7 and 8 and homeland of the Métis. The University of Alberta respects the sovereignty, lands, histories, languages, knowledge systems and cultures of all First Nations, Métis and Inuit nations.

 Authors are accountable for the originality, validity, and integrity of the content of their submissions. In choosing to use Generative AI tools, journal authors are expected to do so responsibly and in accordance with our journal editorial policies on authorship and principles of publishing ethics and book authors in accordance with our book publishing guidelines. This includes reviewing the outputs of any Generative AI tools and confirming content accuracy.

 Authors are responsible for ensuring that the content of their submissions meets the required standards of rigorous scientific and scholarly assessment, research and validation, and is created by the author. Note that some journals may not allow use of Generative AI tools beyond language improvement, therefore authors are advised to consult with the editor of the journal prior to submission.

 Generative AI tools must not be listed as an author, because such tools are unable to assume responsibility for the submitted content or manage copyright and licensing agreements. Authorship requires taking accountability for content, consenting to publication via a publishing agreement, and giving contractual assurances about the integrity of the work, among other principles. These are uniquely human responsibilities that cannot be undertaken by Generative AI tools.

  Authors must clearly acknowledge within the article or book any use of Generative AI tools through a statement which includes: the full name of the tool used (with version number), how it was used, and the reason for use. For article submissions, this statement must be included in the Methods or Acknowledgments section. Book authors must disclose their intent to employ Generative AI tools at the earliest possible stage to their editorial contacts for approval – either at the proposal phase if known, or if necessary, during the manuscript writing phase. If approved, the book author must then include the statement in the preface or introduction of the book. This level of transparency ensures that editors can assess whether Generative AI tools have been used and whether they have been used responsibly. Taylor & Francis will retain its discretion over publication of the work, to ensure that integrity and guidelines have been upheld.


 If an author is intending to use an AI tool, they should ensure that the tool is appropriate and robust for their proposed use, and that the terms applicable to such tool provide sufficient safeguards and protections, for example around intellectual property rights, confidentiality and security.

 Taylor & Francis currently does not permit the use of Generative AI in the creation and manipulation of images and figures, or original research data for use in our publications. The term “images and figures” includes pictures, charts, data tables, medical imagery, snippets of images, computer code, and formulas. The term “manipulation” includes augmenting, concealing, moving, removing, or introducing a specific feature within an image or figure. For additional information on Taylor & Francis’ image policy for journals, please see Images and figures.

 Utilising Generative AI and AI-assisted technologies in any part of the research process should always be undertaken with human oversight and transparency. Research ethics guidelines are still being updated regarding current Generative AI technologies. Taylor & Francis will continue to update our editorial guidelines as the technology and research ethics guidelines evolve.

 Taylor & Francis strives for the highest standards of editorial integrity and transparency. Editors’ and peer reviewers’ use of manuscripts in Generative AI systems may pose a risk to confidentiality, proprietary rights and data, including personally identifiable information. Therefore, editors and peer reviewers must not upload files, images or information from unpublished manuscripts into Generative AI tools. Failure to comply with this policy may infringe upon the rightsholder’s intellectual property.

 Use of manuscripts in Generative AI systems may give rise to risks around confidentiality, infringement of proprietary rights and data, and other risks. Therefore, editors must not upload unpublished manuscripts, including any associated files, images or information into Generative AI tools.

 Editors should check with their Taylor & Francis contact prior to using any Generative AI tools, unless they have already been informed that the tool and proposed use of the tool is authorised. Journal Editors should refer to our Editor Resource page for more information on our code of conduct.

 Peer reviewers are chosen experts in their fields and should not use Generative AI to analyse or summarise submitted articles, or portions thereof, in the creation of their reviews. As such, peer reviewers must not upload unpublished manuscripts or project proposals, including any associated files, images or information, into Generative AI tools.

 These policies have been triggered by the rise of generative AI* and AI-assisted technologies, which are expected to increasingly be used by content creators. These policies aim to provide greater transparency and guidance to authors, reviewers, editors, readers and contributors. Elsevier will monitor this development and will adjust or refine policies when appropriate.

 Where authors use generative AI and AI-assisted technologies in the writing process, these technologies should only be used to improve readability and language of the work. Applying the technology should be done with human oversight and control and authors should carefully review and edit the result, because AI can generate authoritative-sounding output that can be incorrect, incomplete or biased. The authors are ultimately responsible and accountable for the contents of the work.

 Authors should disclose in their manuscript the use of AI and AI-assisted technologies and a statement will appear in the published work. Declaring the use of these technologies supports transparency and trust between authors, readers, reviewers, editors and contributors and facilitates compliance with the terms of use of the relevant tool or technology.

 Authors should not list AI and AI-assisted technologies as an author or co-author, nor cite AI as an author. Authorship implies responsibilities and tasks that can only be attributed to and performed by humans. Each (co-) author is accountable for ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved and authorship requires the ability to approve the final version of the work and agree to its submission. Authors are also responsible for ensuring that the work is original, that the stated authors qualify for authorship, and the work does not infringe third party rights, and should familiarize themselves with our Ethics in Publishing policy before they submit.

 We do not permit the use of Generative AI or AI-assisted tools to create or alter images in submitted manuscripts. This may include enhancing, obscuring, moving, removing, or introducing a specific feature within an image or figure. Adjustments of brightness, contrast, or color balance are acceptable as long as they do not obscure or eliminate any information present in the original. Image forensics tools or specialized software might be applied to submitted manuscripts to identify suspected image irregularities.

 The only exception is if the use of AI or AI-assisted tools is part of the research design or research methods (such as in AI-assisted imaging approaches to generate or interpret the underlying research data, for example in the field of biomedical imaging). If this is done, such use must be described in a reproducible manner in the methods section. This should include an explanation of how the AI or AI-assisted tools were used in the image creation or alteration process, and the name of the model or tool, version and extension numbers, and manufacturer. Authors should adhere to the AI software’s specific usage policies and ensure correct content attribution. Where applicable, authors could be asked to provide pre-AI-adjusted versions of images and/or the composite raw images used to create the final submitted versions, for editorial assessment.

 The use of generative AI or AI-assisted tools in the production of artwork such as for graphical abstracts is not permitted. The use of generative AI in the production of cover art may in some cases be allowed, if the author obtains prior permission from the journal editor and publisher, can demonstrate that all necessary rights have been cleared for the use of the relevant material, and ensures that there is correct content attribution.

 When a researcher is invited to review another researcher’s paper, the manuscript must be treated as a confidential document. Reviewers should not upload a submitted manuscript or any part of it into a generative AI tool as this may violate the authors’ confidentiality and proprietary rights and, where the paper contains personally identifiable information, may breach data privacy rights.

 This confidentiality requirement extends to the peer review report, as it may contain confidential information about the manuscript and/or the authors. For this reason, reviewers should not upload their peer review report into an AI tool, even if it is just for the purpose of improving language and readability.
