Ahmet Hamdi Güzel


Ph.D. Student @ University College London Computational Light Lab

MSc in Artificial Intelligence @ University of Leeds

Hello, I am an Artificial Intelligence MSc student at the University of Leeds and an incoming PhD student in Foundational AI at University College London, starting September 2023. I have been working as a master's intern in the Computational Light Laboratory at University College London. I am also an experienced multi-physics simulation engineer, creating numerical methods to improve the modeling of automotive propulsion systems, including Formula 1 racing cars.

This is my AI-related research/portfolio page. If you want to look at my previous engineering career (publications, patents, etc.), please find me on LinkedIn. I am an active member of Mensa UK, where I meet great friends.

Research Interests : Machine Learning & Computational Displays

I am passionate about exploring the intersections between machine learning and computational displays. My research focuses on developing new algorithms and techniques for enhancing the realism and interactivity of holographic displays using machine learning.

I believe holographic displays have the potential to revolutionize the way we interact with digital information, but creating high-quality, realistic holograms remains a challenge. By incorporating machine learning techniques, I aim to address these challenges and create holographic displays that are more realistic, interactive, and intuitive. My aim is to combine theories and techniques from machine learning and holography to create new and innovative solutions for holographic displays. In particular, I am interested in the use of generative models to generate and optimize holographic content. Additionally, I am exploring new ways to interact with holographic displays using machine learning techniques, such as computer vision and deep learning.


“The ultimate display would, of course, be a room within which the computer can control the existence of matter. A chair displayed in such a room would be good enough to sit in. Handcuffs displayed in such a room would be confining, and a bullet displayed in such a room would be fatal. With appropriate programming such a display could literally be the Wonderland into which Alice walked”

(Ivan E. Sutherland, The Ultimate Display, 1965)

With the power of Artificial Intelligence and advanced display technology, I am confident that we can unlock the door to a new world of infinite possibilities, where we can design and experience our dreams in real-time, like never before.


1- ChromaCorrect: Prescription Correction in Virtual Reality Headsets through Perceptual Guidance


Graduate Coursework/Projects Portfolio

University of Leeds - AI MSc - Course : Deep Learning

Project 1 - Image Classification and Grad-CAM Heat Map Generation

Through this coursework, I:

-Practiced building, evaluating, and fine-tuning a convolutional neural network on an image dataset, from development to testing.

-Gained a deeper understanding of feature maps and filters by visualizing some from a pre-trained network.

PyTorch was used to complete the project. The dataset contains more than 10k images at 3×64×64 resolution, used for multi-class classification.

I developed my own CNN architecture, optimisation method, and training-parameter tuning to achieve the best accuracy on both the training and test sets.


Grad-CAM Generation

In this section, I explored using Gradient-weighted Class Activation Mapping (Grad-CAM) to generate coarse localization maps highlighting the important regions in the test images that guide the model’s prediction. I used a pre-trained AlexNet for a single example image.
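The combination step at the heart of Grad-CAM, pooling the gradients into per-channel weights and taking a ReLU of the weighted sum of feature maps, can be sketched with NumPy. The feature maps and gradients below are random stand-ins for values extracted from a real network:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Combine feature maps and gradients into a coarse localization map.

    feature_maps: (C, H, W) activations from the last conv layer.
    gradients:    (C, H, W) gradients of the class score w.r.t. those maps.
    """
    # Global-average-pool the gradients to get one weight per channel.
    weights = gradients.mean(axis=(1, 2))                              # (C,)
    # Weighted sum of feature maps, then ReLU to keep positive influence.
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0)
    # Normalise to [0, 1] for visualisation as a heat map.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: 4 channels of 8x8 activations.
rng = np.random.default_rng(0)
fmaps = rng.random((4, 8, 8))
grads = rng.standard_normal((4, 8, 8))
heatmap = grad_cam(fmaps, grads)
print(heatmap.shape)  # (8, 8)
```

In practice the heat map is upsampled to the input resolution and overlaid on the image.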


During the class, a private Kaggle competition was opened, and my submission won with a notable margin over the closest student score. [The 1st score is the test case by the teaching assistant.]


Project 2 - Image Caption Generation [CNN + RNN]

Through this coursework, I:

-Understood the principles of text pre-processing and vocabulary building.

-Gained experience working with an image-to-text model.

-Used and compared two different text similarity metrics for evaluating an image-to-text model, and understood the evaluation challenges.

I used the Flickr8k image caption dataset for image caption generation. The dataset consists of 8000 images, each of which has five different descriptions of the salient entities and activities.


During the project, I developed CNN and RNN models to generate image captions with the best possible accuracy.

The PyTorch code was written from scratch [there was no boilerplate code].

Optimisation and hyperparameter tuning were performed. GPU training was completed in Google Colab Pro.

BLEU Scoring


Selected Test Case for Model’s Prediction Performance


The aim of this project is image captioning. A BLEU score of 0.221 means the predicted captions have roughly 22% meaningful overlap with the test-case captions, which is not a strong result. Even so, a low BLEU score should be weighed against human judgment. BLEU is largely used in machine translation, where a translated sentence is compared to an original sentence; in image captioning, comparing a predicted caption against five different references can put BLEU at a disadvantage. Before judging BLEU on whether it rewards meaningful sentences or merely counts overlapping words, human-level judgment should be considered.
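To illustrate why the score is hard to interpret, here is a minimal sketch of modified unigram precision, the core of BLEU-1, against multiple references. The sentences are invented examples, and real BLEU also combines higher-order n-grams and a brevity penalty:

```python
from collections import Counter

def unigram_precision(candidate, references):
    """Modified unigram precision (the core of BLEU-1).

    Each candidate word's count is clipped by the maximum number of times
    it appears in any single reference, so repeating a word is not rewarded.
    """
    cand_counts = Counter(candidate)
    max_ref_counts = Counter()
    for ref in references:
        for word, count in Counter(ref).items():
            max_ref_counts[word] = max(max_ref_counts[word], count)
    clipped = sum(min(count, max_ref_counts[word])
                  for word, count in cand_counts.items())
    return clipped / max(len(candidate), 1)

candidate = "a dog quickly runs on grass".split()
references = ["a dog is running on the grass".split(),
              "the dog runs across a field".split()]
print(round(unigram_precision(candidate, references), 3))  # 0.833
```

Note that 5 of the 6 candidate words appear in some reference, yet the candidate may still read awkwardly, which is exactly why human judgment matters alongside the number.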

University of Leeds - AI MSc - Course : Robotics / Reinforcement Learning

This project explores the challenge of autonomous mobile robot navigation in complex environments. The objective is to reach a goal position and return to the original position without colliding with obstacles. Between the two objectives there is an additional task: the robot should manipulate the object at the goal position before returning to the starting position of task one.

Instead of path planning and SLAM algorithms, deep reinforcement learning is applied in this project. Two different reinforcement learning methods are used: one for the start-to-goal navigation and one for the goal-to-start navigation. Both methods are designed with a neural network architecture (deep reinforcement learning).

1st Method : Deep Reinforcement Learning 1 (Policy Gradient) for navigation 1

2nd Method : Deep Reinforcement Learning 2 (DQN) for navigation 2

In this project, Gazebo is used as the simulator, since it best supports the requirements for training the navigation agent: it provides a physics engine and sensor simulation running faster than a real-world physical robot. OpenAI Robot Operating System (OpenAI ROS) is also used. OpenAI ROS interfaces directly with Gazebo, without requiring any software changes to run in simulation as opposed to the physical world, and it provides a wide range of reinforcement learning libraries for training a TurtleBot on tasks. Since creating this simulation ecosystem from scratch is time consuming, a virtual machine provided by the instructor was used; however, instead of the VM's ready-made world, a new world was created for the navigation task.
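The temporal-difference update that the DQN approximates with a network can be sketched in tabular form. This is a generic Q-learning step for illustration, not the project's actual code, and the states, actions, and rewards are invented:

```python
def q_learning_step(Q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One temporal-difference update, the rule DQN approximates with a network."""
    best_next = max(Q[next_state])                 # greedy bootstrap estimate
    td_target = reward + gamma * best_next         # Bellman target
    Q[state][action] += alpha * (td_target - Q[state][action])
    return Q

# Toy table: 3 states, 2 actions, all values initialised to zero.
Q = [[0.0, 0.0] for _ in range(3)]
Q = q_learning_step(Q, state=0, action=1, reward=1.0, next_state=2)
print(Q[0][1])  # 0.1 after the first update (target 1.0, alpha 0.1)
```

DQN replaces the table with a neural network and adds experience replay and a target network, but the target it regresses toward is this same Bellman estimate.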

Hyperparameter tuning and reward function design were studied, and results were compared in terms of total reward after each step.




University of Leeds - AI MSc - Course : Programming For Data Science

-California Housing Price Prediction

The aim of the project is analysing the data and creating a model that learns from it to predict the median house price for new input data. The objective can be framed as the business model of an imaginary estate-agent valuation company. I believe this is a very good use case, since house price valuation normally depends on nested calculations, creating complex scenarios that specialists must solve. This is time consuming and not scalable. The model should reduce the errors in these complex estimation rules. For example, without such a model, experts can fail to predict the right values because of the number of parameters involved in their calculations, and no estate agent wants to set the wrong price on a house for sale.




The achievements can be categorised into two parts: data analysis and model selection. First, the process from raw housing data to model-ready data was completed successfully without going into excessive detail; the selected methods served well for understanding and analysing the data. Moreover, geographical data visualisation was achieved without compromising Jupyter notebook performance, by researching a high-performance library for geographical visualisation with more than 20,000 markers. Furthermore, a data pipeline was completed to prepare the data for training, with both the training and test data passing through the pipeline. Although a reusable function was not created, it would be good to add one to make the data preparation scalable. For model training, two different models from scikit-learn's machine learning library were used. Their performance was compared, and one was selected as the best model for the test data. The models' drawbacks and results were compared in terms of root mean square error, which was chosen as the performance measure.
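The performance measure used to compare the models, root mean square error, can be sketched as follows; the house values here are invented for illustration:

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error: penalises large prediction errors quadratically."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

# Hypothetical median-house-value predictions (in dollars).
actual = [240000, 310000, 180000]
predicted = [250000, 300000, 200000]
print(round(rmse(actual, predicted)))  # 14142
```

Because the errors are squared before averaging, a single badly mispriced house moves RMSE much more than it would move mean absolute error, which suits a valuation use case where large errors are costly.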


Based on my research into similar projects, this dataset is not large enough to build highly accurate models that predict median house values for any given set of attributes. Moreover, the housing median age and median house value data are capped at selected values, which creates problems when fitting the model. Removing those capped rows should improve the training set, but it would harm the objective of the project, since the model's prediction capability is limited to median house values and house information.

-Future Work

Possible future work would be using the same data analysis process while improving both models, or adding extra steps to the data analysis while trying more machine learning algorithms. Afterwards, datasets for other locations, for example London, could be gathered to test the model.

University of Leeds - AI MSc - Course : Data Science

-Fraud Detection / ML Algorithms Comparison

Fraud detection is a very important problem for banks. Even though fraudulent transactions are a small percentage of all banking activity, fraud can be a big problem, costing banks a lot of money given the massive number of transactions each day. A data scientist should build a very good fraud detection algorithm to overcome this issue.

In this project, two different machine learning algorithms are compared to find the best model for fraud detection.

Summary of Results; kNN Classifier & Decision Tree Classifier Comparison


Two techniques, decision tree and k-NN binary classifiers, were selected as machine learning models for the fraud detection problem.

Initially, the SMOTE over-sampling method is used to overcome the class-imbalance problem; it proved its power by oversampling the minority class without simply duplicating target labels. Next, the data is split statically into training and test sets using a 30% split, as recommended in the course notes (unit 2, classification). In the first part, the decision tree binary classification algorithm was used with the static split, and its hyperparameter was tuned by passing a range of values. The same approach was then repeated with cross-validation (dynamic splitting) to overcome overfitting, observing the performance metrics to choose the best hyperparameters. In the second part of the study, a similar approach was applied to the k-NN binary classifier, tuning the n_neighbours hyperparameter. With the best parameters chosen via cross-validation, the performance metric values are below:
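The core of SMOTE is interpolating a new minority sample between a real minority point and one of its nearest neighbours, rather than duplicating points. A minimal sketch with invented 2-D points (nearest-neighbour search omitted):

```python
import random

def smote_sample(x, neighbour):
    """Create one synthetic minority sample between x and a nearest neighbour.

    SMOTE interpolates rather than duplicates: one random gap in [0, 1) is
    drawn per synthetic sample, placing the new point on the line segment
    between the two real minority points.
    """
    gap = random.random()
    return [xi + gap * (ni - xi) for xi, ni in zip(x, neighbour)]

random.seed(42)
point = [1.0, 2.0]
neighbour = [3.0, 4.0]
synthetic = smote_sample(point, neighbour)
# Each coordinate stays between the two parent points.
assert all(min(a, b) <= s <= max(a, b)
           for a, b, s in zip(point, neighbour, synthetic))
print(synthetic)
```

Because each synthetic point is a fresh interpolation, re-running the pipeline produces slightly different training sets, which is why the results below were checked over several runs.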

AUROC –> Decision Tree : 99.79%, k-NN : 99.86%

Average Precision–> Decision Tree : 99.63%, k-NN : 99.86%

In terms of AUROC, which is used as the primary performance metric for this study, k-NN is better than the decision tree binary classification algorithm. Even though it is only slightly better, for large test sets, and to avoid accumulating costs from fraudulent transactions, k-NN should be chosen as the better option. One drawback of k-NN compared to the decision tree is that it takes more time, because it defers its computation to prediction time over the stored training data; thanks to the low optimised k value, however, this is not a big issue. Finally, because the oversampling is random on each re-run of the model, a few runs were checked, and k-NN showed better results each time.

With the static split method, it can be observed that AUROC and the other performance metrics for the decision tree algorithm come out as a perfect 100%. This clearly shows there is an overfitting problem without the cross-validation technique.

University of Leeds - AI MSc - Course : Algorithms

Implementing A* Search Algorithm [C++]

In this project, the A* graph-traversal and path-search algorithm is implemented in C++. While the course did not require a particular language, I implemented it in C++ to improve my C++ skills, so this project combines my learning from the Algorithms course with my Udacity C++ Developer course. My implementation differed from Udacity's solution, but both produce the same result.
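As an illustration of the algorithm's structure (the coursework itself was written in C++), a compact A* sketch on a toy grid might look like this; the grid and endpoints are invented:

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid; cells with 1 are blocked. Returns a path or None.

    Manhattan distance is an admissible heuristic here, so the first time
    the goal is popped from the open set its path is shortest.
    """
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start, [start])]   # (f = g + h, g, node, path)
    closed = set()
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                nxt = (nr, nc)
                heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

# A wall forces the path around the bottom of the grid.
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (0, 2))
print(len(path) - 1)  # 6 moves around the wall
```

The priority queue always expands the node with the lowest f = g + h, which is what distinguishes A* from plain Dijkstra (h = 0) or greedy best-first search (g ignored).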


The figure below shows the start and goal points along with the shortest path selected by the A* algorithm.


More than 180 lines of C++ code were written from scratch.

Carnegie Mellon University - Computer Graphics

This course is offered by CMU, and its sources (including assignments) are publicly available. During the course, I completed the rasterization assignment in C++. In the project, I implemented a software rasterizer that draws points, lines, triangles, and bitmap images. A given SVG file can be rasterized by my implementation in both the hardware and software rendering paths.

Hardware Rendering

Initially, I implemented hardware rendering with OpenGL API support.


Rasterization [Software Rendering]

Later, the software implementation of rasterization was completed by writing C++ from scratch. The challenge was that, instead of scanning all the pixels in the frame, a cleverer algorithm was required to improve rasterization speed. My approach was to compute each triangle's bounding box and scan only the pixels inside it, which improved the software's speed dramatically. The image below visualises the triangle bounding-box capture algorithm.
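The bounding-box idea can be sketched as follows. This Python illustration (the project itself was C++) combines the bounding-box scan with a standard edge-function coverage test at pixel centres; the triangle coordinates are invented:

```python
def edge(a, b, p):
    """Signed area test: >= 0 when p is on the left of edge a -> b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def covered_pixels(v0, v1, v2):
    """Scan only the triangle's bounding box instead of the whole frame."""
    xs = [v0[0], v1[0], v2[0]]
    ys = [v0[1], v1[1], v2[1]]
    pixels = []
    for y in range(min(ys), max(ys) + 1):
        for x in range(min(xs), max(xs) + 1):
            p = (x + 0.5, y + 0.5)                 # sample at pixel centres
            if edge(v0, v1, p) >= 0 and edge(v1, v2, p) >= 0 and edge(v2, v0, p) >= 0:
                pixels.append((x, y))
    return pixels

# Small counter-clockwise triangle: only 25 candidate pixels are tested,
# however large the rest of the frame is.
print(len(covered_pixels((0, 0), (4, 0), (0, 4))))  # 10
```

For a small triangle in a large frame this reduces the work from the full frame area to just the bounding-box area, which is where the dramatic speedup comes from.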


Rasterization [Software Rendering] - Anti-Aliasing Using Super-sampling

In this task, I extended my rasterizer to anti-alias triangle edges via super-sampling. The super-sampling implementation required a careful memory management approach, since its cost is high. The results below show the performance of my implementation at two different sampling rates (1x, 4x).


C++ Developer Nanodegree Project - A* Search Route Planner


In this project, I have created a route planner that plots a path between two points on a map, using real map data from the OpenStreetMap project. The IO2D library was used for rendering to visualize the algorithm.

The project was written in C++ using real map data and A* search to find a path between two points, similar to a mobile path-planning application. The OpenStreetMap project, which supplies the map data, is an open-source, collaborative endeavor to create free, user-generated maps of every part of the world. These maps are similar to the maps you might use in Google Maps or Apple Maps on your phone, but they are generated entirely by volunteers who perform ground surveys of their local environment.

The code was written using OOP techniques and basic software design principles.

C++ Developer Nanodegree Project - Linux System Monitor


In this project, I developed a system monitor using advanced OOP techniques in C++. The program is a lightweight version of the htop system-viewer application.

Linux exposes real-time operating system information through the filesystem (under /proc). In this project, the developed C++ application reads these files, collects and structures the data, then processes and formats it for output to the Linux terminal.

The project uses ncurses, a library that facilitates text-based graphical output in the terminal.

C++ Developer Nanodegree Project - Chatbot Memory Management Project

The ChatBot code creates a dialogue where users can ask questions about some aspects of memory management in C++. After the knowledge base of the chatbot has been loaded from a text file, a knowledge graph representation is created in computer memory, where chatbot answers represent the graph nodes and user queries represent the graph edges. After a user query has been sent to the chatbot, the Levenshtein distance is used to identify the most probable answer. The code is fully functional as-is and uses raw pointers to represent the knowledge graph and interconnections between objects throughout the project.
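The Levenshtein distance used to match a user query to the most probable answer can be sketched as follows; this is a standard two-row dynamic-programming implementation for illustration, not the project's actual code:

```python
def levenshtein(a, b):
    """Minimum number of edits (insert, delete, substitute) turning a into b."""
    prev = list(range(len(b) + 1))          # distances for the empty prefix of a
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution (free if equal)
        prev = cur
    return prev[-1]

# The chatbot picks the stored answer whose key is closest to the query.
print(levenshtein("pointer", "pointers"))        # 1
print(levenshtein("kitten", "sitting"))          # 3
```

Keeping only two rows reduces memory from O(len(a) × len(b)) to O(len(b)), which matters when comparing a query against every node in the knowledge graph.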

In this project, I analyzed and modified the program. Although it already executes and works as intended, I added advanced concepts: smart pointers, move semantics, ownership, and memory allocation.

C++ Developer Nanodegree Project - Concurrent Traffic Simulation

This is the project for the fourth course in the Udacity C++ Nanodegree Program: Concurrency. Throughout the Concurrency course, I developed a traffic simulation in which vehicles are moving along streets and are crossing intersections. However, with increasing traffic in the city, traffic lights are needed for road safety. Each intersection will therefore be equipped with a traffic light. In this project, I built a suitable and thread-safe communication protocol between vehicles and intersections to complete the simulation. I used my knowledge of concurrent programming (such as mutexes, locks and message queues) to implement the traffic lights and integrate them properly in the code base.
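The vehicle-to-intersection communication can be sketched as a minimal thread-safe message queue. This Python illustration mirrors the idea only; the actual project used C++ mutexes, locks, and condition variables:

```python
import threading
from collections import deque

class MessageQueue:
    """Minimal thread-safe FIFO: lights send() phase changes, vehicles receive()."""
    def __init__(self):
        self._queue = deque()
        self._cond = threading.Condition()

    def send(self, message):
        with self._cond:
            self._queue.append(message)
            self._cond.notify()            # wake one waiting vehicle

    def receive(self):
        with self._cond:
            while not self._queue:
                self._cond.wait()          # block until a light posts a new phase
            return self._queue.popleft()

queue = MessageQueue()
light = threading.Thread(target=queue.send, args=("green",))
light.start()
phase = queue.receive()                    # vehicle blocks here until the light turns
light.join()
print(phase)  # green
```

The condition variable is the key design choice: vehicles sleep instead of busy-polling the intersection, and the lock inside the `with` blocks keeps the queue consistent under concurrent access.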

C++ Developer Nanodegree Project - Snake Game with Difficulty Level Setting

This C++ project is the capstone (final) project of the Udacity C++ Nanodegree. The source code has mostly been adapted from the starter code in [Udacity's repo](https://github.com/udacity/CppND-Capstone-Snake-Game). The code base can be divided architecturally and functionally into four distinct class-based components:

-Renderer : renders the state of the game using the popular SDL library.

-Game : constructs and maintains the game board and the placement of game elements like the snake and food.

-Snake : constructs and maintains the snake object as it moves across the board, gaining points and checking whether it has run into itself.

-Controller : receives input from the user in order to control the movement of the snake.

Once the game starts and the Game, Controller, and Snake objects are created, the game loops through each component: the Controller grabs input from the user, the Game updates its state, and the Renderer graphically renders the state of the game.


Personal Game Development Project - Twin Interaction eVTOL

I developed a game in Unity using C# (Visual Studio). The game interacts with hobby motors and communicates over a serial port. The player's control of the aircraft's speed drives the real hardware, and the sensors on the hardware de-rate the aircraft's performance. The idea was to build a real-hardware-based physics engine for a propulsion system. The game consists of more than 700 lines of C# code.



Udacity Nanodegree - C++ Developer

Arizona State University - Deep Learning in Visual Computing Systems [PyTorch]

Duke University - Introduction to Machine Learning [PyTorch]

UIUC - OOP Data Structures in C++

Arizona State University - Data Structure and Algorithms

Arizona State University - Computer Organization and Assembly Language processing

Arizona State University - Operating Systems

Michigan State University - Game Development [Unity]