| Form of studies |
Bachelor |
| Title of the study programm |
Computer Systems |
| Title in original language |
Lielu valodas modeļu veiktspējas novērtēšana eksāmenos un testos |
| Title in English |
Evaluating the Performance of Large Language Models in Exams and Tests |
| Department |
Faculty of Computer Science, Information Technology and Energy
| Scientific advisor |
Alla Anohina-Naumeca |
| Reviewer |
Vadims Zīlnieks |
| Abstract |
Many benchmarks exist for assessing the capabilities of large language models (LLMs), most of them automated; each has its own advantages and limitations.
The primary objective of the thesis is to evaluate the performance of LLMs on complex reasoning tasks in education.
The thesis examines the testing of the models GPT-4o, Claude Opus 4, Gemini 2.5 Pro, DeepSeek V3, and LLaMA 4 Maverick. The testing is based on real test assignments from an artificial intelligence course.
The tasks being tested include state-space and graph search, optimization and logical planning, game tree construction and decision-making, as well as basic machine learning and semantic structure interpretation.
The results show that the models achieve better performance on tasks with prepared answer options, while they perform worse on tasks involving uninformed and heuristic search algorithms on a given graph.
The bachelor thesis consists of 54 pages; it contains 1 figure, 1 formula, 9 tables, 4 appendices, and 61 references. |
| Keywords |
Lielie Valodu Modeļi (LLM), MI izglītībā, Automatizēta novērtēšana, Eksāmenu izvērtēšana, GPT-4o, Claude Opus 4, Gemini 2.5 Pro, DeepSeek V3, LLaMA 4 Maverick, Promptu inženierija, Cilvēka-MI saskaņa. |
| Keywords in English |
Large Language Models (LLMs), AI in Education, Automated Assessment, Exam Evaluation, GPT-4o, Claude Opus 4, Gemini 2.5 Pro, DeepSeek V3, LLaMA 4 Maverick, Prompt Engineering, Human-AI Concordance. |
| Language |
eng |
| Year |
2025 |
| Date and time of uploading |
02.09.2025 23:50:08 |