Artificial intelligence at the University of Tartu

1.2. What are the current limitations and drawbacks associated with AI usage?

The use of AI presents significant technical, ethical, and environmental challenges, such as high energy consumption, issues related to copyright and privacy, and data bias. All of these require a transparent and responsible approach to avoid negative societal impacts.

Although today’s AI applications are very powerful and capable of solving many tasks at a level comparable to or even better than humans, they still have various technical, societal, and ethical limitations and drawbacks.

  • Text-based AI models do not reason about ideas and solutions the way humans do; they simply predict the most probable next word based on patterns in their training data, which means they can confidently produce mistakes and incorrect information (see the brief illustration after this list).
  • Using text-based AI models can improve student and employee performance and reduce the time spent on tasks, but handing the work over to them entirely does not support learning.
  • AI models are strongly shaped by the quality of their training data. Because that data is currently collected in large volumes from the internet, it contains biases and prejudices related to race, gender, socioeconomic status, cultural differences, political views, and other factors. For example, ChatGPT tends to prefer job candidates whose resumes do not mention a disability.
  • Running AI requires immense computational power, and the energy consumption of the supercomputing centres that provide it is substantial, giving AI a large ecological footprint.
  • Training AI models consumes a great deal of electricity. For instance, training the GPT-3 model generated approximately 502 tonnes of CO2 emissions, and each question posed to ChatGPT generates an average of 4.3 grams of CO2. Data centres as a whole are estimated to account for approximately 2.5–3.7% of global greenhouse gas emissions, surpassing even the impact of global aviation.
  • Cooling the computing centres needed to create and run AI models consumes a large amount of clean water. For example, answering 10–50 prompts with ChatGPT (GPT-3) takes approximately half a litre of water. If we add the indirect water consumption of manufacturing the processors and other components used in data centres (producing a single microprocessor takes approximately 8 litres of water), AI’s environmental impact is considerable.
  • Although the operating principles of AI models such as deep neural networks and transformers are well understood, the exact process by which a model arrives at a specific result remains a “black box”, and even its developers cannot always explain it. This is problematic in fields that demand high accuracy, such as healthcare, law, and finance, where it is crucial to understand the basis for a decision.
  • One of the most complex issues surrounding AI applications, including text-based AI models, is their limited transparency and explainability. We do not know exactly which sources or criteria were used to assemble the training data, or how the algorithms behind AI applications work. It has been reported, for example, that OpenAI’s ChatGPT draws its base data from various internet sources, including e-books, social media, Wikipedia, news articles, scientific publications, forums, and many other places. At first glance this selection seems good, but it cannot be denied that some authors, publishers, and media outlets may be more reliable or relevant than others.
  • The sources used to train AI applications often contain material that is protected by copyright. Using previously published work in this way conflicts with academic principles, which require proper citation of others’ work. You have probably heard of court cases in which authors have disputed the use of their creations for training AI applications; this is a serious concern among creators.
  • AI applications must also take security and privacy into account: data management is subject to various data protection principles, and specific safeguards are needed before data can be entered with confidence. A more secure option is, for example, Microsoft Copilot, which can be used with university credentials, such as those of the University of Tartu. For more common AI applications developed on commercial principles, however, it is advisable to follow the principle mentioned by a student interviewed in Elika Kotsar’s master’s thesis: “Don’t share anything you wouldn’t want to have printed on your T-shirt and wear publicly.”
  • Making decisions with the help of AI applications, or using AI to create new content, can have significant societal impacts, including the transparency and bias issues of text-based AI models and copyright infringement problems, which affect political, cultural, economic, and social spheres. To mitigate these risks, AI developers are continuously working on improving data preprocessing and further cleaning the training data.
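The following is a minimal sketch of the next-word prediction mentioned in the first bullet. It assumes the Hugging Face transformers and PyTorch libraries and the small, publicly available GPT-2 model, chosen here purely for illustration; it is not the model or interface behind ChatGPT. The point is that the model does not “know” an answer: it merely ranks candidate next words by probability.

    # Illustrative only: requires the `transformers` and `torch` packages.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    import torch

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The capital of Estonia is"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits

    # Probability distribution over the vocabulary for the word that follows the prompt
    next_word_probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(next_word_probs, 5)
    for prob, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(token_id))!r}: {float(prob):.3f}")

Whichever continuation happens to score highest is what the model produces, whether or not it is factually correct; this is why such models can state wrong information with great fluency.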


Self-assessment
