Author: Mustafa Ansari, OMS II PCOM GA
Editors: Pranitha Pothuri and Mahi Basra
Updated: December 16, 2024
Artificial Intelligence (AI) is revolutionizing many fields, including healthcare. It can process large amounts of data in a short time, identify patterns, improve accuracy, and enhance efficiency. Despite these benefits, AI also presents challenges such as ethical concerns, job displacement, and data bias.
AI can rapidly analyze data with a high level of accuracy. Because it applies the same methodology to every record, it avoids much of the human error that creeps into manual analysis. It can go through millions of data points, identify patterns, and provide insights that would take a human months to produce. This makes it a useful tool in genetics, for example, when searching for markers and predicting patient outcomes. Humans can work well with AI in research, using it as a tool for data collection and analysis: AI can streamline the more routine aspects of research, allowing humans to focus on the more challenging ones.
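To make this concrete, the short sketch below shows one purely hypothetical way such pattern-finding might look in practice: a simple model is fit to invented genetic-marker data and then inspected to see which markers it associates with patient outcomes. The dataset, marker indices, and model choice are illustrative assumptions, not methods described in this article.

```python
# Illustrative sketch only: hypothetical marker data and a simple model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical dataset: 500 patients, 20 binary genetic markers (0/1).
X = rng.integers(0, 2, size=(500, 20))
# Hypothetical outcome driven mostly by markers 3 and 7.
logits = 1.5 * X[:, 3] + 1.0 * X[:, 7] - 1.0
y = (rng.random(500) < 1 / (1 + np.exp(-logits))).astype(int)

# The same methodology is applied uniformly to every record.
model = LogisticRegression(max_iter=1000).fit(X, y)

# Rank markers by the strength of their association with the outcome.
ranked = np.argsort(np.abs(model.coef_[0]))[::-1]
for idx in ranked[:5]:
    print(f"marker_{idx}: coefficient = {model.coef_[0][idx]:+.2f}")
```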
AI can also help a researcher gain a deeper understanding of a complex simulation within minutes and build predictive models that were previously not feasible. By forecasting trends and the potential impact of changes, these models can provide valuable insight to a researcher.
Although there are many benefits to using AI in research, there are also challenges and pitfalls. One of the most significant is data bias. AI systems learn from the data they are trained on, and if that training data is biased or does not represent the population being studied, the results will be biased as well. If AI is relied on as the sole analysis tool, this can lead to skewed results and reinforce existing biases. Ethical concerns are another pitfall, including issues around privacy, consent, and the mismanagement of data. Processing sensitive personal data without proper safeguards, for example, could violate privacy laws such as the General Data Protection Regulation (GDPR).
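Before turning to the GDPR itself, here is a minimal, hypothetical sketch of how a researcher might check for the representation problem described above. The column names and values are invented, and the 15% threshold is an arbitrary assumption.

```python
# Illustrative sketch: checking whether a training set represents all groups.
import pandas as pd

# Invented training set with an age-group column and a study outcome.
training_data = pd.DataFrame({
    "age_group": ["18-40", "18-40", "41-65", "18-40", "65+", "18-40",
                  "41-65", "18-40", "18-40", "18-40"],
    "outcome":   [1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
})

# If one group dominates, a model trained on this data may inherit that bias.
shares = training_data["age_group"].value_counts(normalize=True)
print(shares)

underrepresented = shares[shares < 0.15]
if not underrepresented.empty:
    print("Warning: underrepresented groups:", list(underrepresented.index))
```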
The GDPR is a comprehensive European Union (EU) data protection law. Although it was designed to protect the data privacy of people in the EU, it also affects American organizations. Companies that have an establishment in the EU, offer goods and services to EU residents, or monitor the behavior of EU residents are required to follow the GDPR, which grants data protection rights to individuals in the EU. Using AI without substantial security measures in place to keep sensitive information secure could easily lead to GDPR violations, which carry legal and financial consequences.
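As one hypothetical example of such a safeguard, direct identifiers can be stripped or pseudonymized before records ever reach an AI pipeline. The field names and salting scheme below are assumptions for illustration, not a compliance recipe or legal advice.

```python
# Illustrative sketch: pseudonymizing records before AI processing.
import hashlib

SALT = "replace-with-a-secret-value"  # stored separately from the data

def pseudonymize(record: dict) -> dict:
    """Drop direct identifiers and replace the patient ID with a salted hash."""
    cleaned = {k: v for k, v in record.items() if k not in {"name", "email"}}
    token = hashlib.sha256((SALT + record["patient_id"]).encode()).hexdigest()[:16]
    cleaned["patient_id"] = token
    return cleaned

record = {"patient_id": "12345", "name": "Jane Doe",
          "email": "jd@example.com", "lab_value": 7.2}
print(pseudonymize(record))  # identifiers removed before any AI processing
```

Keeping the salt separate from the data is what prevents someone from recovering identities simply by hashing guessable patient IDs.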
Regulatory issues aside, AI also raises ethical issues in research around plagiarism. AI can generate images and text without acknowledging the sources it draws from, which raises questions about authorship, originality, and intellectual property when its output closely copies copyrighted content. The same concern applies in academic and professional contexts, where it can result in allegations of plagiarism that hamper the entire research process. To mitigate this, AI use should be supervised to ensure proper citation and to verify the authenticity of its output.
In a broader view, using AI for content creation saves time and effort. However, it tends to oversimplify, glossing over subtle details and producing a shallow interpretation of essential information. This is problematic in areas such as medicine and law, where detailed insight and context matter, and errors from relying on AI-generated sources in these settings can have dire consequences. So while AI is a reasonable research instrument, its use must be regulated so that high ethical standards, creativity, and substance are not lost.
When you ask an AI system a question, it gives you a direct answer but rarely shows its reasoning, and understanding the reasoning behind an answer is a crucial part of research. Without that transparency, AI-generated research can leave many gaps in understanding. As AI becomes more popular and easier to use, researchers will naturally become more reliant on it, which risks eroding critical thinking and the years of understanding and expertise a researcher brings. Although AI can provide the material and the data, a researcher's judgment and creativity can be lost. This can also lead to job displacement.
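One partial remedy, sketched below with invented data, is to probe a trained model for which inputs actually drove its predictions rather than accepting its answers at face value; the dataset, features, and model choice here are assumptions for illustration.

```python
# Illustrative sketch: probing a model's reasoning with permutation importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)

# Invented data: the outcome truly depends only on features 0 and 2.
X = rng.normal(size=(300, 5))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# How much does shuffling each feature hurt the model? Large drops mean the
# feature mattered, giving the researcher some insight into the "why".
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {score:.3f}")
```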
AI may also pose a financial burden, since it requires resources to access large databases and computing infrastructure. This can be problematic for research conducted in lower-income countries and at smaller institutions in the United States, and it can widen the gap between well-funded and under-resourced research. Maintaining AI systems also requires ongoing investment, increasing the overall cost of a research project.
The role of AI in research comes with both benefits and pitfalls. It can transform research across many fields, enhance data analysis, accelerate the research process, and support complex simulations and predictive models. Its pitfalls include data bias, lack of transparency, ethical concerns, job displacement, and high long-term costs. With proper guidance, AI can be a powerful tool that aids in data collection and analysis, but researchers must still interpret the results to understand and evaluate complex innovations and discoveries.
Reference
Chubb J, Cowling P, Reed D. Speeding up to keep up: exploring the use of AI in the research process. AI Soc. 2022;37(4):1439-1457. doi: 10.1007/s00146-021-01259-0. Epub 2021 Oct 15. PMID: 34667374; PMCID: PMC8516568. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8516568/