Research Projects

Example Research Projects

The projects listed below were submitted by academic staff members. Please note that these are just examples; other projects within the remit of the CDT themes are welcome.

Project Title: Using AI methods to improve understanding of the causes of land use change
Theme: AI for the Natural Environment

Human transformation of land (land use change; LUC) has had fundamental impacts on ecosystems globally. As such, understanding the drivers of land use change, and predicting its future trajectories, is a critical research challenge. However, establishing which drivers are relevant in which situations is difficult, owing to the complexity of the drivers and the highly context-dependent nature of land use change. The recent explosion in machine learning (ML) and deep learning (DL) approaches may have important applications for LUC modelling, particularly because of the increasing number of high-resolution, multi-year ‘data cubes’ of land cover data that are now available. ML/DL is already used for creating land cover maps from remote sensing and for short-term prediction of land use change. However, its utility for improving understanding of the causes of land use change, or of the effectiveness of policies to change it, remains poorly understood. The goal of this PhD is to build on preliminary unpublished work led by Felix Eigenbrod and to trial and refine promising AI methods for increasing our understanding of the causes of land use change. It is anticipated that causal ML and DL methods will be particularly relevant for this work.
Project Title: 2D materials-based memristor system for in-memory sensing and computing hardware
Theme: Sustainable AI

Today’s AI hardware technology has high carbon and silicon footprints [1]. Integrating the three main components of a computing architecture (sensors, memories, and processors) in a circuit/system requires a lot of peripheral electronics. For example, sensors receive analogue signals, yet memories and processors work digitally; thus, an analogue-to-digital converter (ADC) needs to be integrated with each sensor to make the computation happen. Moreover, the data transfer between these three main components is power-hungry and incurs a high time delay [2]. Now, imagine if sensing, memorising, and computing were done in a single component; this could significantly reduce power consumption and computation time and simplify the computing architecture, making the AI ecosystem more sustainable [3].

This PhD project aims to invent a new component that can sense, memorise, and compute in a single architecture utilising atomically thin materials and memristor technologies. The student will have the opportunity to work in an interdisciplinary environment, developing skills in 2D materials, memristor technologies, and electronics design. The student will learn how to design and fabricate a massive array of memristor prototypes and integrate them to build a silicon neural network. The final objective of this project is to build an application demonstrator hardware to compute vector-matrix multiplication using analogue stimuli. The student will access our nanofabrication centre, which includes a state-of-the-art cleanroom, materials and electrical characterisation facilities with excellent research staff and technicians to support their research. The student will be encouraged to attend international conferences to present their research work and trained in publishing their results in high-impact journals.
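To make the final objective concrete, below is a minimal Python sketch of how an idealised memristor crossbar performs analogue vector-matrix multiplication: the programmed conductances play the role of matrix entries, and the currents summed along each row give the result. The array size, conductance range and noise level are illustrative assumptions, not properties of any device developed in this project.

```python
import numpy as np

# Idealised memristor crossbar: each cross-point stores a conductance G[i, j] (in siemens).
# Applying analogue voltages v[j] to the columns and summing the currents along each row
# performs the vector-matrix multiplication i = G @ v in one step (Ohm's + Kirchhoff's laws).

rng = np.random.default_rng(0)

n_rows, n_cols = 4, 3                                  # hypothetical 4x3 crossbar
G = rng.uniform(1e-6, 1e-4, (n_rows, n_cols))          # programmed conductances, S
v = np.array([0.1, 0.2, 0.05])                         # analogue input voltages, V

i_out = G @ v                                          # row currents, A (the "computed" result)

# A real array adds non-idealities; a simple stand-in is multiplicative read noise:
i_measured = i_out * (1 + rng.normal(0, 0.02, n_rows))  # assumed 2% read noise

print("ideal currents  (uA):", np.round(i_out * 1e6, 3))
print("with read noise (uA):", np.round(i_measured * 1e6, 3))
```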

The student will be supervised by Dr Yasir Noori (materials processing), Dr Firman Simanjuntak (device fabrication), and Prof. Mark Zwolinski (circuits/system design) and will be part of the Sustainable Electronics Technologies (SET) research group, one of the leading research groups in the UK, which offers unique solutions to real-world problems by delivering efficient electronics while addressing all aspects of sustainability. We are seeking exceptional candidates who want to join our team and devote their passion to addressing some of the challenges we have identified.

[1] Zidan, M. A., Strachan, J. P. & Lu, W. D. The future of electronics based on memristive systems. Nat. Electron. 1, 22–29 (2018).
[2] Indiveri, G. & Liu, S. C. Memory and Information Processing in Neuromorphic Systems. Proc. IEEE 103, 1379–1397 (2015).
[3] Prinzie, J., Simanjuntak, F. M., Leroux, P. & Prodromakis, T. Low-power electronic technologies for harsh radiation environments. Nat. Electron. 4, 243–253 (2021).
Project Title: Realizing Green Electrification in Transportation with AI Support
Theme: AI for Transportation and Logistics

Realizing green electrification is important for achieving net zero carbon in the transportation sector. A wealth of literature has documented the fast growth of the electric vehicle market, with, however, a lack of focus on commercial vehicles. Heavy-duty vehicles, for example, which are widely used in road freight logistics, now face emerging electrification needs. Notably, the electrification of such commercial vehicles differs from what has been heavily studied in the consumer market, in that the managerial decision depends on the business scenario.


To this end, AI can be used to help better understand, for example, changes in logistics demand and their impact on electrification decisions. This project therefore aims to understand the potential incentives and/or barriers for the electrification of commercial vehicles with AI support. In particular, AI will be used as part of the Smart-Predict-then-Optimize (SPO) approach to incorporate the unknown future into optimization. Game theory will then be used to construct different scenarios to deepen the understanding of market change and/or the presence of competition and cooperation. Based on the results obtained, optimal subsidies and/or other policies/external incentives could be studied to accelerate such electrification and contribute directly to reducing greenhouse gas emissions.
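As a rough illustration of the predict-then-optimize idea, the sketch below first fits a simple demand model and then uses its forecast inside a toy electrification cost minimisation. The features, cost coefficients and capacities are invented placeholders, not project data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Minimal predict-then-optimize sketch (illustrative numbers only).
# Step 1 (Predict): forecast next-period logistics demand from historical features.
# Step 2 (Optimize): choose how many trucks to electrify to minimise expected cost.

rng = np.random.default_rng(1)

# Hypothetical history: [fuel_price, freight_index] -> daily demand (thousand tonne-km)
X_hist = rng.uniform([1.2, 90], [2.0, 120], (200, 2))
y_hist = 50 + 30 * X_hist[:, 1] / 100 - 5 * X_hist[:, 0] + rng.normal(0, 2, 200)

demand_model = LinearRegression().fit(X_hist, y_hist)
demand_hat = demand_model.predict([[1.8, 110]])[0]        # forecast for the planning period

# Toy cost model: each electric truck covers `cap` units/day; unmet demand runs on diesel.
cap, capex_per_ev, diesel_cost, elec_cost = 10.0, 40.0, 3.0, 1.0   # assumed unit costs

def total_cost(n_ev, demand):
    served_ev = min(n_ev * cap, demand)
    return n_ev * capex_per_ev + served_ev * elec_cost + (demand - served_ev) * diesel_cost

best_n = min(range(0, 21), key=lambda n: total_cost(n, demand_hat))
print(f"predicted demand: {demand_hat:.1f}, electrify {best_n} trucks")
```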
Project Title: AI to understand the ocean floor
Theme: AI for the Natural Environment

Rationale: The activities of a diverse array of sediment-dwelling fauna are known to mediate carbon remineralisation, biogeochemical cycling and other important properties of marine ecosystems, but the contributions that different seabed communities make to the global inventory have not been fully established. Decades of ecological research have mainly relied on point observations in time or space that tend to be geographically constrained, yet ecological processes occur over multiple spatial and temporal scales and are based on a diversity of habitat types and environmental settings. Consequently, most models are parameterised with broad functional descriptors or selected values of species contribution that oversimplify or misrepresent temporal and spatial variation in the mediating role of biota. The limiting step to date has been the inability to detect community dynamics at the scale at which they happen, as this requires high-resolution techniques, such as photogrammetry, that can subsequently be upscaled using machine learning. Releasing the constraint of scale dependence presents a new opportunity to interrogate ecosystems at the temporal and spatial scales at which they operate and removes the need to compartmentalize the environment into arbitrarily defined habitats or units of time. In doing so, estimates of faunal mediation will be more relevant and accurate, and will help improve global predictions of biogeochemical cycles.
Project Title: Sustainable Semiconductor Manufacturing via AI Optimization Algorithms
Theme: AI for Sustainable Operations and Circular Economy

Fabricating semiconductor devices, such as processors, graphics or memory units, is one of the most complicated and costly industrial manufacturing processes. Each one of the advanced processor chips that we carry in our phones undergoes thousands of manufacturing steps at fabrication plants (Fabs), where each step can involve tens of control parameters that must be finely tuned to lead to the desired results. This makes the research, development, and testing cycles of semiconductors extremely costly, time-consuming and labour-intensive.

This PhD project aims to make semiconductor fabrication smarter, more economical and sustainable by combining the exceptional fabrication facilities we have at Southampton with our strong AI expertise to work on this nascent research area.

The first stage of this PhD project will require the experimental fabrication of nanoscale structures using e-beam lithography, plasma etching, and material deposition tools. The second stage will require collecting data from characterising these nanostructures using microscopic and spectroscopic tools and feeding it to AI/Machine learning algorithms. The algorithms will then be tested in the third stage in real scenarios to assess their capability to predict and mitigate defects in fabrication and optimise the fabrication process.
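As a hedged illustration of the third stage, the sketch below trains a standard classifier to map fabrication control parameters to a defect label and then scores a candidate recipe. The parameter names, ranges and labels are synthetic assumptions rather than real process data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Illustrative sketch: learn a mapping from fabrication control parameters to a defect
# label derived from characterisation, then query it to flag risky recipes in advance.

rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.uniform(100, 400, n),   # e-beam dose (uC/cm^2), assumed range
    rng.uniform(30, 120, n),    # etch time (s), assumed range
    rng.uniform(5, 50, n),      # deposition thickness (nm), assumed range
])
# Synthetic ground truth: defects more likely at high dose combined with long etch.
p_defect = 1 / (1 + np.exp(-(0.02 * (X[:, 0] - 300) + 0.05 * (X[:, 1] - 90))))
y = rng.random(n) < p_defect

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))

# Score a candidate recipe before committing cleanroom time to it.
candidate = [[350, 110, 20]]
print("predicted defect probability:", clf.predict_proba(candidate)[0, 1])
```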

You will have access to one of the most advanced university fabrication cleanrooms in the world and a large set of characterisation labs. You will be trained to perform experimental work, write scientific publications and present your results to industrial partners and at major conference events to the academic community. You will work within the Sustainable Electronic Technologies group which hosts leading academics and a vibrant team of PhD students and postdoctoral researchers.

The supervisory team, Dr Yasir Noori (ECS), Dr Ben Mills (ORC) and Dr Firman Simanjuntak (ECS), combine an extensive set of expertise in semiconductor fabrication, artificial intelligence for manufacturing, and characterisation of devices and systems.

After completing your PhD you will have a unique skill set that will put you at the forefront of future semiconductor fabrication, making you well positioned to have a fruitful career in academia or industry.

References:
O. Buchnev, et al. “Deep-Learning-Assisted Focused Ion Beam Nanofabrication” Nano Letters, 22, 2734-2739, 2022.
M. Nandipati, et al. “Bridging Nanomanufacturing and Artificial Intelligence – A Comprehensive Review” Materials, 17, 1621, 2024.
G. Tello, et al. “Deep Structures Machine Learning Model for the Recognition of Mixed-Defect Patterns in Semiconductor Fabrication Processes”, IEEE Transactions on Semiconductor Manufacturing, 31, 2, 2018.
M. Maggipinto, et al. “DeepVM: A Deep Learning-based Approach with Automatic Feature Extraction for 2D Input Data Virtual Metrology” Journal of Process Control, 84, 24-34, 2019.
Yijie Liu, et al. “Towards Smart Scanning Probe Lithography: A Framework Acceleration Nanofabrication Process with in-situ Characterisation via Machine Learning” Microsystems & Nanoengineering, 9, 128, 2023. 
Project Title: Resilient Resource Allocation in Dynamic Settings under Uncertainty (RRADSU)
Theme: AI for Transportation and Logistics

Allocation of scarce resources is prevalent in our lives: from allocating drivers and vans for the delivery of goods, to the allocation of beds and doctors in hospitals, to the allocation of paramedics and ambulances in disaster response. Take emergency services as an example. They play a critical role by dispatching vehicles and qualified professionals to emergency situations. While providing skilled and well-equipped staff in a short time is key in most scenarios, they operate with limited resources, and hence it is crucial to ensure that the right level of resources is used to deal with a situation while maintaining the resilience of the system for upcoming emergencies.

An efficient allocation of resources makes the best use of the limited resources available and consequently enhances the system’s sustainability. In many applications, it is also important for users to perceive the allocation as “fair” for the allocation mechanism to be deployed and sustained successfully. Efficiency and fairness have been studied extensively; however, most studies make assumptions that do not hold in many real-life scenarios. An allocation is resilient if, should a problem occur (e.g. an ambulance breaks down and becomes unavailable), it can be amended with minimal loss to efficiency and fairness. Very few studies address resilience, and those that do consider only limited settings.

We aim to address the existing limitations of the literature by considering settings that are dynamic and incorporate uncertainty.

(1) Dynamic: The set of resources and tasks/agents (e.g. ambulances and patients) changes over time. Allocations of resources are temporal (e.g. an ambulance is allocated to a patient for a length of time and becomes available again after the patient reaches the hospital). Resources may have spatial attributes, and their location plays a role in how an allocation should be made and how the quality of an allocation should be evaluated.

(2) Partial and dynamic preferences: Agents’ preferences over available resources, and resources’ priorities over agents, may not be known to the system designer or even to the agents and resources themselves. There could be privacy or security concerns which raise the need to design systems that can produce good results without access to full preferences. Preferences can change over time when more information becomes available or the status of the agent changes. For example, a patient in an emergency may be happy to go to any hospital with an available bed, but when the situation stabilises, they prefer to stay in a hospital closer to home.

The main goal of the project is to use multi-agent systems and machine learning techniques to design fair, efficient and resilient allocations of scarce, and possibly heavily constrained, resources in dynamic settings with partial preferences. Our research objectives could include (1) formalising what constitutes a resilient mobility/logistics system, (2) analysing the trade-off between resilience, efficiency and fairness, (3) investigating the resilient, efficient and fair allocation of scarce resources, and (4) utilising agent-based simulations to evaluate various coordination and resource allocation mechanisms or policies. The specifics of the project can be adapted to the PhD candidate's skills and interests.
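As a toy illustration of the resilience question, the sketch below computes a minimum-cost ambulance-to-incident assignment, removes one ambulance, re-solves, and reports the efficiency loss. The cost matrix is random and purely illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy illustration of efficiency vs. resilience: assign ambulances to incidents to
# minimise total response time, then re-assign after one ambulance becomes unavailable
# and measure the loss. Response times are random placeholders.

rng = np.random.default_rng(7)
n_amb, n_inc = 5, 4
cost = rng.uniform(2, 30, (n_amb, n_inc))              # minutes from ambulance i to incident j

rows, cols = linear_sum_assignment(cost)
baseline = cost[rows, cols].sum()
print(f"baseline total response time: {baseline:.1f} min")

failed = rows[0]                                       # the first assigned ambulance breaks down
remaining = [i for i in range(n_amb) if i != failed]
rows2, cols2 = linear_sum_assignment(cost[remaining, :])
repaired = cost[np.array(remaining)[rows2], cols2].sum()
print(f"after losing ambulance {failed}: {repaired:.1f} min "
      f"(loss = {repaired - baseline:.1f} min)")
```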

Relevant Publications

Akbarpour, Li and Oveis Gharan, "Thickness and information in dynamic matching markets", J. Political Econ, 2019.

Garg and Murhekar: "On fair and efficient allocations of indivisible goods", AAAI, 2021

Zeng and Psomas, "Fairness-Efficiency Tradeoffs in Dynamic Fair Division", EC, 2020

Lodi, Olivier, et al. "Fairness over time in dynamic resource allocation with an application in healthcare". arXiv, 2022

Yang, R., Ford, B. J., Tambe, M., & Lemieux, A. (2014, May). Adaptive resource allocation for wildlife protection against illegal poachers. In AAMAS (pp. 453-460).
Project Title: Sustainable Generative AI Models
Theme: Sustainable AI

Project Summary

This project aims to pioneer advancements in the efficiency of generative AI (GenAI) models, focusing on achieving faster inference times and reduced model sizes without compromising performance. As GenAI models become increasingly central to a wide range of applications, from generating images to generating videos and music, their computational demand and the time required for training and inference have escalated. This research seeks to address these challenges by developing innovative techniques for efficiency, including architectural innovations, compression strategies, algorithmic improvements, and system-level optimizations. The goal is to enable the deployment of state-of-the-art GenAI models across a broader range of computing environments, from high-end servers to consumer-level machines. This project will contribute to making GenAI more democratic, efficient, and scalable, paving the way for its application in real-time and resource-constrained scenarios.


Objectives

1. To develop cutting-edge techniques for model compression, such as pruning, quantization, and knowledge distillation, tailored for GenAI models.
2. To design and experiment with new GenAI architectures that are more efficient, requiring less computational power and memory.
3. To create new algorithms and system wide optimizations to accelerate both training and inference processes for GenAI, making them more suitable for deployment across a variety of computing environments.
4. To develop and utilize benchmarks and metrics specifically designed to evaluate the efficiency and performance of GenAI under various computational constraints.

Methodology

Via a thorough literature review and benchmarking of existing models, we aim to identify key limitations and research gaps ripe for advancement. Possible solutions may relate to tailored pruning techniques, which selectively remove less critical parameters, alongside quantization methods that lower the precision of numerical values to significantly decrease model sizes. These optimization methods are still under-explored in the area of GenAI. Moreover, knowledge distillation will be utilized to investigate the potential for subgraph optimization. We also aim to investigate the potential of distributed computing along with memory reuse and caching. These strategic directions are anticipated to reduce inference times and computational demands without sacrificing the performance of the models.
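As a minimal, hedged example of the kind of compression passes referred to above, the sketch below applies magnitude pruning and dynamic int8 quantization to a small feed-forward block in PyTorch; it is not a generative model, and the layer sizes and pruning ratio are arbitrary choices.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Minimal sketch (not a GenAI model): unstructured magnitude pruning followed by
# dynamic quantization on a small feed-forward block, to illustrate the kind of
# compression passes the project would tailor to generative architectures.

model = nn.Sequential(
    nn.Linear(512, 1024), nn.ReLU(),
    nn.Linear(1024, 512),
)

# 1. Prune 50% of the smallest-magnitude weights in every Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")        # make the sparsity permanent

# 2. Quantize the remaining weights to int8 for inference (dynamic quantization).
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 512)
with torch.no_grad():
    print("pruned model output shape:   ", model(x).shape)
    print("quantized model output shape:", quantized(x).shape)
```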

You may also be asked to explore new model architectures designed to balance computational efficiency with robust performance, focusing on the potential of subgraph decomposition to break down complex models into simpler, more manageable components. This strategy aims to facilitate more efficient processing and parallelization. Enhancing the underlying algorithms of GenAI models, such as optimizing attention mechanisms and streamlining data processing, will be another crucial aspect for improving overall model responsiveness and efficiency. Additionally, the project will investigate adaptive deployment strategies, enabling models to dynamically adjust their computational complexity in real time, ensuring optimal performance across diverse hardware environments. Through comprehensive experimental validation and testing, this approach seeks to push the boundaries of what is possible with GenAI, setting new standards for efficiency and accessibility in AI technologies.

References
[1] Li, Yanyu, et al. "Snapfusion: Text-to-image diffusion model on mobile devices within two seconds." Advances in Neural Information Processing Systems 36 (2024).
[2] Li, Yanjing, et al. "Q-dm: An efficient low-bit quantized diffusion model." Advances in Neural Information Processing Systems 36 (2024)
[3] Wang, Yunke, et al. "Learning to schedule in diffusion probabilistic models." Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2023.
Project Title: Sustainable manufacturing of AI hardware
Theme: Sustainable AI

AI is entering our everyday lives at a rapidly growing pace, but at a huge environmental cost. The AI algorithms that are used, for example, for image generation consume large amounts of energy. It is necessary to introduce alternative approaches with lower environmental impact.

Neuromorphic engineering offers a solution to this problem. The development of electronic devices that can realistically emulate biological neural networks holds promise for significantly reducing the energetic footprint and lowering the CO2 emissions generated by current AI hardware.

The aim of this project is to develop a new form of neuromorphic systems that merge photonic, electronic and ionic effects, bringing new prospects for in-memory computing and artificial visual memory applications. This will be achieved upon developing photoelectric memories fabricated with more sustainable processes and greener materials. First, you will develop single devices that emulate biological synapses in the human visual system, capable of detecting (in analogy to the retina in the eye) and memorising or even processing images (like the visual cortex in the brain). Then, you will design and implement a novel neuromorphic optoelectronic array that will perform certain neuromorphic functionalities, e.g., pattern recognition tasks. Finally, you will assess the sustainability of this approach by applying life-cycle assessment techniques to each step of the fabrication process.

This novel electronic technology can effectively emulate synaptic weights and may be programmable both via light and voltage. This provides additional flexibility for implementing both synaptic weight updates as well as homeostatic effects. Furthermore, the technology relies on low-temperature processes and can thus be integrated on flexible substrates, which paves the way to incorporation of AI functionalities to wearable devices.

The outcomes of your research can have various applications, such as Internet of Things (IoT) devices for visual data communication, human/environment detection and tracking, and augmented/virtual reality.
Project Title: Efficient Machine Learning for Space Applications
Theme: Sustainable AI

Motivation: The increasing deployment of satellites in Low Earth Orbit (LEO) has led to a rise in data generation, particularly in the fields of Earth observation, telecommunications, and global internet coverage. Recent trends show a shift toward satellite constellations, where numerous small satellites work together to achieve broader coverage and higher data throughput. This shift necessitates advanced onboard processing to manage the vast amounts of data generated while addressing the constraints of limited bandwidth, power, and computational resources. To meet these challenges, there is an urgent need to develop efficient machine learning (ML) techniques tailored for space applications, enabling satellites to process data in real time, reduce reliance on ground stations, and optimise operational efficiency.

Research Problem: The main challenge is to design and implement machine learning algorithms that can operate efficiently within the constrained environment of space-based platforms, particularly LEO satellites. The research will address the following key problems:
1. Computational Constraints: LEO satellites have limited processing power, necessitating lightweight ML models.
2. Data Transmission: Developing selective data transmission strategies for better efficiency.
3. Distributed Computation: Leveraging the distributed nature of satellite constellations for collaborative data processing.
4. Robustness and Reliability: Ensuring ML models are resilient to the harsh conditions and variability (thermal or daylight) in space.

Methodology:
1. Optimization of ML Operations: Design energy-efficient, lightweight ML models through techniques such as model compression, quantisation, and pruning to fit the limited resources of LEO satellites.
2. Distributed Computation Framework: Create a framework for distributed computation across satellite constellations, focusing on efficient task allocation, data/model partitioning, and aggregation to maximize the collective processing power.
3. Efficient Data Transmission Protocols: Develop protocols prioritizing the transmission of essential information while using edge computing techniques to process less critical data onboard, reducing the load on communication channels.
4. Federated Learning for Space Systems: Develop and refine federated learning algorithms allowing multiple LEO satellites to collaboratively train ML models without sharing raw data. This reduces data transmission needs while maintaining data privacy.
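A minimal federated-averaging sketch corresponding to point 4 is given below; each "satellite" is simulated with its own synthetic regression data, and only model weights are exchanged. The dimensions, learning rate and number of rounds are illustrative assumptions.

```python
import numpy as np

# Minimal federated-averaging sketch: each satellite fits a local linear model on its
# own (private) observations; only model weights are exchanged and averaged by a
# coordinating node. Data and dimensions are illustrative placeholders.

rng = np.random.default_rng(3)
true_w = np.array([2.0, -1.0, 0.5])

def local_update(w_global, n_samples=200, lr=0.1, epochs=20):
    """One satellite: gradient steps on its local least-squares loss."""
    X = rng.normal(size=(n_samples, 3))
    y = X @ true_w + rng.normal(0, 0.1, n_samples)     # local data, never transmitted
    w = w_global.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / n_samples
        w -= lr * grad
    return w

w_global = np.zeros(3)
for round_ in range(5):                                # communication rounds
    local_weights = [local_update(w_global) for _ in range(4)]   # 4 satellites
    w_global = np.mean(local_weights, axis=0)          # FedAvg aggregation
    print(f"round {round_}: w = {np.round(w_global, 3)}")
```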

Expected Outcomes: The research will yield advanced ML techniques optimized for LEO satellite systems, enhancing onboard data processing capabilities, reducing dependence on ground-based infrastructure, and improving the efficiency and autonomy of space missions.

References -

[1] Denby, Bradley, et al. "Kodan: Addressing the computational bottleneck in space." Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 3. 2023.
[2] Liu, Weisen, et al. "In-Orbit Processing or Not? Sunlight-Aware Task Scheduling for Energy-Efficient Space Edge Computing Networks." IEEE INFOCOM 2024-IEEE Conference on Computer Communications. IEEE, 2024.
[3] Chen, Yijie, et al. "Energy and Time-Aware Inference Offloading for DNN-based Applications in LEO Satellites." 2023 IEEE 31st International Conference on Network Protocols (ICNP). IEEE, 2023.
[4] Xing, Ruolin, et al. "Deciphering the enigma of satellite computing with cots devices: Measurement and analysis." Proceedings of the 30th Annual International Conference on Mobile Computing and Networking. 2024.
Project Title: Neurosymbolic Machine Learning for Distributed Fibre Optic Sensing (DFOS)
Theme: Sustainable AI

This PhD project focuses on leveraging Neurosymbolic Machine Learning (ML) to drive advancements in Sustainable AI, particularly by optimising the efficiency of AI systems to reduce energy consumption. Neurosymbolic ML, which combines the interpretability of symbolic AI with the flexibility of subsymbolic techniques like deep learning, is a promising approach to achieving sustainability in AI. Symbolic reasoning, with lower computational demands, can significantly reduce the resource consumption of AI models, making them more eco-friendly without sacrificing accuracy or adaptability.

The focus of this PhD project will be on using neurosymbolic ML methods to analyse Distributed Fibre Optic Sensing (DFOS) data. DFOS technology is capable of providing highly granular data on environmental vibrations, structural integrity, and other signals critical for monitoring urban infrastructure and natural environments. One of the key challenges of this technology is the vast density of data, which needs massive storage solutions, as well as computational power to analyse it. The student will explore whether neurosymbolic methods can be leveraged to reduce the data storage and computational resources necessary to work with this type of data.

This research will be tightly aligned with several research projects across the University and the National Oceanography Centre (NOC), using DFOS data collected in different environments. In particular, the student will benefit from involvement in a new grant starting in January 2025 which will collect DFOS data in the cities of Southampton and London, giving prime access to a novel and one-of-a-kind dataset. This grant will also involve collaborators from different fields, from the social sciences to the humanities, to collectively analyse the impacts of DFOS data in urban settings. Students with an interest in impact and interdisciplinary research will find opportunities for career progression in these areas. Moreover, the student will also have access to data held at NOC, collected through different campaigns in marine environments. The availability of these diverse data sources will enable the investigation of noise-tolerant sustainable ML models capable of operating in multiparameter environments, such as those influenced by urban infrastructure or marine ecosystems.

The main research challenge lies in developing event detection systems that can operate effectively in complex, noisy environments while maintaining low energy consumption. Traditional deep learning methods, though powerful, require significant computational resources, especially when applied to large datasets like those generated by DFOS platforms. Neurosymbolic ML, by contrast, offers a more computationally efficient approach, enabling the development of models that can detect and classify events with reduced energy demands.
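As a purely illustrative sketch of the neurosymbolic split, the toy example below uses a small learned classifier to turn cheap per-channel statistics into a soft predicate, and hand-written symbolic rules to decide what to store; the signals, features, thresholds and rule labels are all invented, and a real DFOS pipeline would look quite different.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy illustration of the neurosymbolic split: a small learned component turns raw
# channel statistics into a soft predicate, and lightweight symbolic rules combine
# them into storage/alert decisions. All signals and thresholds here are synthetic.

rng = np.random.default_rng(5)

def features(trace):
    """Cheap per-channel statistics standing in for a heavy deep feature extractor."""
    return [trace.std(), np.abs(np.fft.rfft(trace))[1:10].mean()]

# Synthetic training traces: label 1 = periodic vibration (machinery/traffic-like).
X, y = [], []
for _ in range(300):
    t = np.linspace(0, 1, 512)
    periodic = rng.random() < 0.5
    trace = (np.sin(2 * np.pi * 20 * t) if periodic else 0) + rng.normal(0, 0.5, 512)
    X.append(features(trace)); y.append(int(periodic))
clf = LogisticRegression().fit(X, y)

def symbolic_rules(p_periodic, amplitude):
    # Hand-written rules on top of the learned predicate (assumed thresholds).
    if p_periodic > 0.8 and amplitude > 1.0:
        return "sustained periodic source: flag for storage"
    if amplitude > 2.0:
        return "high-energy transient: alert"
    return "background: discard"

t = np.linspace(0, 1, 512)
new_trace = 1.5 * np.sin(2 * np.pi * 20 * t) + rng.normal(0, 0.5, 512)
p = clf.predict_proba([features(new_trace)])[0, 1]
print(symbolic_rules(p, new_trace.std()))
```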

You will be supervised by a team of interdisciplinary researchers in machine learning, signal processing and distributed systems, and will have the opportunity to collaborate with industry partners to further your research. You will join the School of Electronics and Computer Science in collaboration with the National Oceanography Centre (NOC), and will have professional development opportunities through the Alan Turing Institute, the UKRI TAS Hub and Responsible AI UK (RAi UK), as well as access to Future Worlds to explore commercialisation of your research.

While the primary theme of this project is Sustainable AI, it also has broader implications for other SustAI CDT themes. DFOS data can be applied to sectors such as Transportation and Logistics and Energy and Buildings, where real-time monitoring and optimisation can reduce resource consumption and enhance sustainability. For instance, DFOS technology could help optimise smart grids or monitor structural integrity in transport networks, opening new avenues for interdisciplinary applications.

Project Title: Sustainable data management and ethical AI governance for DFOS in smart cities
Theme: Sustainable AI

This PhD project focuses on addressing the sustainability challenges associated with AI-augmented Distributed Fibre Optic Sensing (DFOS) by developing sustainable data management techniques and ensuring compliance with ethical and legal standards for AI deployment. DFOS systems, which could be implemented for monitoring urban infrastructure and environmental conditions, generate massive amounts of data that are resource-intensive to store and process. This poses a significant challenge in terms of energy consumption, data management, and privacy protection. The project aims to tackle both the technical and governance aspects, ensuring that DFOS technology is used efficiently and responsibly.

The focus of this PhD will be on implementing energy-efficient data management solutions for DFOS while developing governance frameworks that ensure ethical and legal compliance with regulations like GDPR.

The technical side of this project will focus on developing data minimisation techniques to reduce the volume of data generated by DFOS systems without compromising the integrity of urban monitoring. This will involve creating methods for compressing and selectively storing data, ensuring that only relevant information is retained while reducing energy-intensive storage and processing. The student will explore adaptive retention policies that retain data only when necessary, such as for specific regulatory requirements or infrastructure analysis, and that delete or anonymise non-essential data to reduce storage costs and ensure compliance with data protection laws (e.g., removing footsteps or voices from data before analysing it).

The governance side of the project will focus on developing frameworks for ethical and sustainable AI deployment, ensuring that the use of DFOS aligns with both environmental and societal goals. In particular, it will address privacy concerns, data ownership, and GDPR compliance while promoting responsible data practices that minimise unnecessary data collection and storage, reducing the system’s overall energy footprint. As DFOS can capture sensitive information related to human activity, it is crucial to handle this data in a way that not only protects individuals' rights but also supports sustainable urban management by optimising data use and reducing waste, contributing to a more efficient and eco-friendly monitoring system.

This project will be closely aligned with ongoing research projects that collect DFOS data from urban settings in London and Southampton as part of a new grant starting in January 2025. The student will gain access to this unique dataset and work with interdisciplinary teams, including social scientists and humanities experts, to ensure the technology’s sustainability from both technical and governance perspectives. Additionally, the student will have the opportunity to collaborate with researchers working on smart cities, privacy law, and sustainable AI, ensuring a broad and interdisciplinary approach to the research.

You will be supervised by a team of interdisciplinary researchers in machine learning, data science, DFOS and public policy, and will have the opportunity to collaborate with industry and public sector partners interested in smart city technologies. The project offers professional development opportunities through partnerships with organisations such as the Alan Turing Institute and Responsible AI UK (RAi UK). The student will also benefit from involvement in commercial innovation programmes, such as Future Worlds, to explore potential applications and impacts of their research.
Project Title: An AI-Supported Mechanical Testing Protocol for Accelerating Materials Qualification
Theme: AI for Sustainable Operations and Circular Economy

Our vision is to accelerate high-fidelity validation of new, more durable materials, reducing material usage, increasing resource efficiency, minimising premature failures, and lowering operational costs for high-value manufactured products.

High-temperature materials performance involves a complex interplay of factors, including creep, fatigue, oxidation, and microstructural stability [1]. This critically limits the longevity of many high-value technologies, infrastructures, and engineered products, from electricity generation to transportation and industrial processes. The development cycle for new materials, especially for safety-critical applications, typically spans 10-20 years. This process requires rigorous validation through testing at the lab scale before scaling up to production. However, the efficiency of validation is significantly hindered by existing testing methodologies, which are laborious, time-consuming, and costly [2]. These methods are low-throughput, often evaluating one alloy composition at a time [3].

This project aims to pioneer a data-rich, high-temperature testing protocol by integrating heterogeneous testing (evaluating multiple compositions within a single test) with predictive and generative AI techniques. The goal is to accelerate the qualification of sustainable stainless steels with enhanced durability. The key objectives are:
• Develop a high-throughput testing procedure integrated with full-field strain measurements at high temperature, to efficiently evaluate material performance.
• Create AI models (e.g., an established AI model based on 2-point statistics and principal component analysis techniques [4], as well as a more exploratory AI model based on generative adversarial networks [5]) to automatically quantify key microstructural features and their evolution before and after testing; a minimal sketch of the 2-point statistics and PCA step is given after this list.
• Establish data-driven models with enhanced predictive capabilities to assess material damage sensitivity and recommend the next test to further refine accuracy.
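The sketch below illustrates the 2-point statistics and PCA step referenced in the second objective: spatial autocorrelations of segmented micrographs are computed with the FFT and reduced to a few principal-component scores per image. The "micrographs" here are random binary tiles standing in for thresholded SEM data.

```python
import numpy as np
from sklearn.decomposition import PCA

# Sketch of a standard microstructure-quantification route: compute 2-point
# (auto)correlation statistics of segmented micrographs via the FFT, then reduce
# them with PCA to a few descriptors per image. Images below are synthetic.

rng = np.random.default_rng(11)

def two_point_autocorrelation(phase_map):
    """Autocorrelation of a binary phase indicator, computed with the FFT (periodic)."""
    f = np.fft.fft2(phase_map)
    corr = np.fft.ifft2(f * np.conj(f)).real / phase_map.size
    return np.fft.fftshift(corr)

# 50 fake segmented SEM tiles (1 = phase of interest), e.g. thresholded precipitates.
tiles = (rng.random((50, 64, 64)) < rng.uniform(0.2, 0.5, (50, 1, 1))).astype(float)
stats = np.stack([two_point_autocorrelation(t).ravel() for t in tiles])

pca = PCA(n_components=3)
scores = pca.fit_transform(stats)          # low-dimensional microstructure descriptors
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
print("first tile's PC scores:  ", np.round(scores[0], 3))
```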

The methodology and associated tasks are described below.

Task 1: We will fabricate a stainless steel sample with varied compositions. Using scanning electron microscopy (SEM), we will capture high-resolution images, focusing on the grain-level microstructural features at 50 distinct points, each representing a unique composition. After subjecting the sample to high-temperature testing for a given time, we will re-image the same areas to observe microstructural changes. These images will be divided into smaller sections, resulting in approximately 50,000 paired images before and after testing.

Task 2: We will run established algorithms to identify regions where cracks or other forms of damage have occurred. Using these data, we will train AI deep-learning methods to predict damage based on the initial microstructure and composition. By analysing how the microstructure evolves, we aim to understand the relationship between composition and durability.

Task 3: We will employ advanced AI techniques, such as generative adversarial networks (GANs), to predict the behaviour of new, untested microstructures. This approach not only accelerates material development but also aligns with sustainable AI practices by minimising the need for extensive physical testing.

AI will streamline the entire material testing process, including the collection and processing of large volumes of microstructure data, ultimately delivering faster and more accurate prognostics. While the focus is on stainless steel, the AI-enhanced end-to-end material testing protocol developed by this project is designed to be adaptable, offering benefits well beyond this material type.

References:
1. https://doi.org/10.1038/s41578-022-00496-z
2. https://www.royce.ac.uk/collaborate/roadmapping-landscaping/materials-4-0/
3. https://doi.org/10.1016/j.jnucmat.2021.153425
4. https://doi.org/10.1038/s41598-019-39315-x
5. https://doi.org/10.1038/s41524-023-01042-3
Project Title: Fabrication and modelling of artificial synapses for neuromorphic computing
Theme: Sustainable AI

AI computation using today’s technologies consumes significant energy. CPU, sensor, and memory are the main building blocks of computation; among these three, memory is a power-hungry component.[1] Despite great efforts to develop low-powered memories for AI hardware, the fabrication and operation of these devices still have high carbon and silicon footprints.[2] We need to invent a new memory technology that is more sustainable.

Memristor nanoelectronics are emerging memory devices capable of mimicking the synaptic plasticity of the mammalian brain (known as neuromorphic). Memristors offer the potential for realising fast, small, and low-powered AI machines.[3] The transfer of information between neurons is based on the transport of chemical ions between synapses; similarly, memristors can be programmed to remember and forget information by tuning the transport of ions in the device via reduction-oxidation reactions induced by an electric field, thus called artificial synapses.[4]

However, we still do not know the dynamics between the electric field and the ionic movement that govern the strength of the connection between these synapses (synaptic plasticity). Answering this fundamental question is critical to realising efficient programming/algorithms and achieving optimal computing capacity.

This PhD project aims to answer this fundamental question: you will develop memristor artificial synapses and build simulation models of their synaptic plasticity. We will train you to design a massive array of fully functional memristor prototypes and to simulate how the diffusion and drift of ions/defects could lead to synaptic plasticity. The final objective of this project is to model the impact of materials/interfaces and electrical parameters on various types of plasticity.
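As a hedged, phenomenological sketch of the modelling side, the example below evolves a single normalised state variable under voltage pulses to produce potentiation and depression of a device conductance. The threshold, time constant and window function are illustrative modelling assumptions, not a fitted description of any real memristor stack.

```python
import numpy as np

# Minimal phenomenological sketch of memristive synaptic plasticity: a single internal
# state variable w (normalised ionic/defect configuration, 0..1) drifts under applied
# voltage pulses, and the device conductance interpolates between G_off and G_on.

G_on, G_off = 1e-3, 1e-5        # S, fully formed / ruptured conduction channel (assumed)
tau, v_th = 1e-3, 0.3           # drift time constant (s) and threshold voltage (V), assumed
dt = 1e-5                       # simulation time step (s)

def step(w, v):
    """Field-driven drift of the state variable with a soft window near the boundaries."""
    if abs(v) < v_th:
        return w                                   # sub-threshold: no ionic motion
    drive = np.sign(v) * (abs(v) - v_th) / tau
    window = 4 * w * (1 - w)                       # slows drift near w = 0 and w = 1
    return np.clip(w + drive * window * dt, 0.0, 1.0)

def conductance(w):
    return G_off + w * (G_on - G_off)

# Potentiation (100 positive pulses) followed by depression (100 negative pulses).
w, trace = 0.1, []
for v in [+0.8] * 100 + [-0.8] * 100:
    for _ in range(10):                             # 10 time steps per pulse
        w = step(w, v)
    trace.append(conductance(w))

print("conductance after potentiation (S):", f"{trace[99]:.2e}")
print("conductance after depression  (S):", f"{trace[-1]:.2e}")
```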

You will be supervised by Dr Firman Simanjuntak (memristor technology) and Dr Iris Nandhakumar (electrochemical theory and modelling). You will be part of the Sustainable Electronics Technologies (School of Electronics & Computer Science) and Electrochemistry (Chemistry & Chemical Engineering) research groups, which are among the leading research groups in the UK, offering unique solutions to real-world problems by delivering efficient electronics while addressing all aspects of sustainability.
You will have access to our state-of-the-art cleanroom, materials, and electrical characterisation facilities, and we have excellent research staff and technicians who will support your research (https://www.southampton-nanofab.com/). We will encourage you to attend international conferences in the UK and abroad to present your research work and guide you in publishing your results in high-impact journals.
We are seeking exceptional candidates who want to join our team and devote their passion to addressing some of the challenges we have identified.

1. Indiveri, G. & Liu, S. C. Memory and Information Processing in Neuromorphic Systems. Proc. IEEE 103, 1379–1397 (2015).
2. Zidan, M. A., Strachan, J. P. & Lu, W. D. The future of electronics based on memristive systems. Nat. Electron. 1, 22–29 (2018).
3. Prinzie, J., Simanjuntak, F. M., Leroux, P. & Prodromakis, T. Low-power electronic technologies for harsh radiation environments. Nat. Electron. 4, 243–253 (2021).
4. Simanjuntak, F. M. et al. Conduction channel configuration controlled digital and analog response in TiO2-based inorganic memristive artificial synapses. APL Mater. 9, 121103 (2021).
Project Title: Predictive Modelling and Management of Invasive Species
Theme: AI for the Natural Environment

The increasing frequency of environmental disruptions caused by invasive species poses significant threats to global biodiversity, ecosystem services, and human livelihoods. These phenomena are exacerbated by climate change and human activities, leading to disruptions in aquatic ecosystems, including altered food webs, reduced water quality, and negative socio-economic impacts (Dominguez Almela et al. 2024). Traditional monitoring and management approaches struggle to keep pace with these rapidly evolving challenges (Dominguez Almela et al. 2023). However, AI has the potential to transform how we predict, monitor, and manage biodiversity threats, enabling proactive and data-driven conservation efforts.

This project seeks to address the limitations of current management of the invasive brown seaweed sargassum, which is blooming in the tropical Atlantic and affecting biodiversity, society, and economy in coastal regions of the Caribbean and West Africa (Marsh et al. 2023). By harnessing AI, the project aims to improve predictive modelling, real-time monitoring, and the effectiveness of management interventions. The main research questions are:
(1) How can AI-driven models be developed for sargassum to predict its spread?
(2) What factors most significantly influence these environmental disruptions, and how can AI integrate these factors to improve prediction accuracy?
(3) How can AI be applied to develop real-time monitoring tools that process large-scale environmental data to detect early warning signs of ecosystem disruptions?
(4) How can AI simulations optimize different management strategies, balancing ecosystem conservation with socio-economic activities like fisheries and tourism?

Methods:
(a) Data collection: combining remote sensing, public environmental data (e.g. GBIF, NOAA), and citizen science platforms (e.g. CoastSnap) to gather large-scale datasets on species distributions, water quality, sargassum biomass accumulation, and other relevant ecological variables. Collaborate with non-academic partners such as environmental agencies (e.g. CONANP Mexico and EPA Ghana) and marine conservation organizations (e.g. Sand Dollar) to access real-time and historical environmental monitoring data.
(b) AI model development: develop machine learning models (e.g., neural networks, random forests) to predict the spread and impact of sargassum proliferations. Integrate ecological, climatic, and socio-economic data to enhance model predictions, with a focus on the drivers of invasion success and biomass growth.
(c) Real-time monitoring tools: build AI tools capable of processing satellite imagery and sensor data for real-time detection of ecosystem disruptions, allowing for early intervention and mitigation efforts. Implement deep learning techniques for automated species identification and biomass estimation from visual data.
(d) Simulation and optimization of management strategies: Utilize agent-based modelling and AI-driven simulations to test various conservation and management scenarios under different environmental conditions (Dominguez Almela et al. 2020; 2021). Collaborate with policymakers and resource managers to ensure that the AI tools developed are practical and can inform decision-making processes.

Dominguez Almela, et al. (2020) Biological Invasions, 22(5), 1461–1480.
Dominguez Almela, et al. (2021) Journal of Applied Ecology, 58, 2427–2440.
Dominguez Almela, et al. (2023) Environmental Research Letters, 18(1), 061002.
Dominguez Almela et al. (2024) Environmental Research Letters, 19, 013003
Marsh et al. (2023) PLOS Climate, 2 (7), e0000253.
Project Title: AI for Marine Biodiversity Monitoring
Theme: AI for the Natural Environment

Studying the ocean and its inhabitants is an extremely challenging task: the animals and the physics that govern their habitat vary over an enormous range of spatiotemporal scales, and this renders monitoring from crewed ships infeasible. Scientists are increasingly relying on Autonomous Underwater Vehicles (AUVs) coupled with imaging systems to observe marine ecosystems. However, these tools are typically deployed to run a preset transect, continually collecting data for later processing, which removes opportunities to react in real time to observed data. The potential for adaptive sampling on AUVs from image data is enormous: creative sampling paradigms could allow a single robot to stop and follow a newly observed organism, or a system of robots to track and interrogate ephemeral biological features like thin layers of plankton, helping us to better understand our changing oceans. Such functionality would allow scientists to consider and study new ideas in biological oceanography. While embedded AI is becoming increasingly feasible as a way to enable this, the associated power consumption has a significant impact on vehicle battery life and hence the length of a mission.

This project seeks to capitalise on advances in multimodal sensing and dynamic inference to make on-board decisions that enable efficient and adaptive sampling/control based on data collected in real time. This will entail both processing the visual data itself and producing outputs that are actionable for mission planning. There are several interesting challenges to address, and a range of potential directions and PhD project opportunities. The project will explore both new techniques in computer vision to enable AI at the “edge”, i.e. within AUVs, and AI-enabled scheduling on board the robot.

In addition, techniques from swarm robotics and multi-agent reinforcement learning can be used to coordinate multiple AUVs where there is limited ability to communicate. This can furthermore be supplemented with static sensors from e.g. a crewed ship. Novel approaches need to be designed for path planning and to build a collective map of marine populations using a diverse set of vehicles and sensors.
Project Title: Machine learning to guide new solar energy technologies
Theme: AI for Sustainable Energy and Buildings

Global energy demands are rising, and there is a desperate need to generate and store renewable energy. Solar panels can convert sunlight into electricity; however, heating comprises almost half of the world’s energy usage, so there remains a need for generating and storing renewable energy as heat to reduce our reliance on natural gas.(1)

Molecular solar thermal (MOST) energy systems offer a solution to this problem. MOST relies on molecular photoswitches: molecules which, in response to light, convert into a (meta-)stable higher-energy state, storing the light energy as chemical energy which can be released on demand as heat.(2) A MOST device could be retrofitted atop solar panels, storing extra solar energy as heat whilst also increasing the efficiency of photovoltaic electricity generation.(3) However, the difficult synthesis of photoswitches limits the translation of MOST to the real world.(2)

Our research group has recently discovered a new class of photoswitch, the N-amino N-heterocyclic hydrazones. These are easily synthesised in a single step from commercial starting materials and show promising MOST properties; however, their novelty also limits our understanding of structure-function relationships.

We seek to overcome this by developing a machine learning model which can predict key photoswitching properties based on molecular structure, to guide synthetic design towards ideal photoswitches for MOST systems. The student will work closely with our synthetic team in the School of Chemistry for model validation.


WP1 Development

A growing library of NANs and the associated photoswitching data is available within the group; by the time this project begins it is anticipated that there will be >100 molecules and associated datasets. Structural descriptors will be parameterised and related to experimentally determined photoswitching properties. Quantum mechanical calculations will be conducted to determine excited state properties which will also be included as input parameters. A machine learning model will then be developed towards predicting structures with specific properties.(4)
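A minimal sketch of the WP1 workflow is shown below: molecular descriptors are computed with RDKit and regressed against a photoswitching property. The SMILES strings are generic placeholder switches and the half-life values are invented, purely to show the descriptor-to-property pipeline; the real model would use the group's NAN library and quantum-mechanically derived excited-state descriptors.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestRegressor

# Sketch of WP1: parameterise structural descriptors for a set of photoswitches and
# regress a photoswitching property against them. Molecules and targets are placeholders.

smiles = [
    "c1ccc(/N=N/c2ccccc2)cc1",          # azobenzene (placeholder)
    "C(=Cc1ccccc1)c1ccccc1",            # stilbene (placeholder)
    "NN=Cc1ccccc1",                     # benzaldehyde hydrazone (placeholder)
    "CN(C)N=Cc1ccccc1",                 # N,N-dimethylhydrazone (placeholder)
    "NN=Cc1ccc(O)cc1",                  # hydroxy-substituted hydrazone (placeholder)
    "NN=Cc1ccc([N+](=O)[O-])cc1",       # nitro-substituted hydrazone (placeholder)
]
half_life_h = [5.2, 0.3, 12.0, 8.5, 20.1, 2.4]      # invented target values (hours)

def descriptors(smi):
    mol = Chem.MolFromSmiles(smi)
    return [Descriptors.MolWt(mol),
            Descriptors.TPSA(mol),
            Descriptors.NumRotatableBonds(mol),
            Descriptors.MolLogP(mol)]

X = np.array([descriptors(s) for s in smiles])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, half_life_h)

new_switch = "NN=Cc1ccc(C)cc1"                      # methyl-substituted candidate (placeholder)
print("predicted half-life (h):", model.predict([descriptors(new_switch)])[0])
```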


WP2 Refinement

Model reliability will be improved through model-guided design of new switches, enhancing predictive ability. Using the existing dataset, transfer learning approaches will be implemented to identify parameter combinations which may reduce dataset requirements.(5) This would allow the application of the model to other photoswitch classes.


WP3 Prediction and implementation

With a refined model in-hand, a photoswitch with ideal parameters for a MOST system will be designed. The top five lead candidates will be synthesised on a small scale and tested. The final lead candidate will be synthesised on a larger scale for use in the development of a small prototype device.


The student undertaking this project will have the option to diversify their skills and learn synthetic chemistry and photophysical characterisation, or to focus purely on computational analysis and machine learning model development. They will be trained to conduct quantum mechanical calculations and will be given the opportunity to travel to Uppsala University (Sweden) for collaborations with Dr Stefano Crespi, a world leader in this field.



1 Energy Procedia., 2013, 33, 311-321

2 Joule, 2021, 5, 3116-3136

3 Joule, 2024, 8, 1-16.

4 Chem. Sci., 2022, 13, 13541-13551

5 Digital Discovery, 2023,2, 941-951
Project Title: AI for Sustainable Shipping Operations
Theme: AI for Sustainable Operations and Circular Economy

The shipping industry currently accounts for 3% of global emissions [1] and was responsible for 1 billion tonnes of CO2 emissions in 2012 [2]. Reducing these emissions is a high priority for many organisations, including the EU (cf. the Monitoring, Reporting and Verification Maritime Regulation [3]), the United Nations (via the International Maritime Organisation [4]) and companies such as BP [5]. In shipping, biofouling refers to the buildup of biological material (e.g. plants or small animals) on ships’ hulls. Biofouling can have a significant impact on ships’ efficiency by introducing additional drag, due to the rougher or more irregular hull of the ship. This can cause as much as a 55% increase in emissions in extreme cases [4]. If biofouling can be mitigated it will have a significant sustainability impact, by (i) reducing emissions from shipping and (ii) reducing fuel consumption and thereby reducing pressure on fossil fuel reserves.

The aim of this project is to use AI to predict and understand how ship operational performance is affected by biofouling, and how it can most efficiently be mitigated. To accomplish this, the student working on this project will explore several approaches, including the following:
1. Models for biofouling: Use statistical learning tools to estimate the factors leading to rapid biofouling. Then use these factors to construct prediction algorithms to assess how severe biofouling can be on a vessel, in order to quantify its potential damage to the environment; for example, build a regression framework to assess fuel efficiency which incorporates biofouling through fundamental physics (e.g. using physics-informed machine learning) or by estimating the extent of biofouling directly from images of the vessel’s hull (a minimal sketch of such a regression is given after this list).
2. Biofouling mitigation: Being able to accurately predict biofouling and its potential consequences will inform on the best mitigation strategies, which will be investigated based on parameters such as sailing route planning, paint coatings, weather conditions or longer waiting times at ports. In a similar vein we will investigate whether there are particular “tipping points” associated with a jump in fuel consumption, to identify optimal hull cleaning times.
3. Validation of predictions: We will design experiments that can be implemented in practice to statistically test the impact of our recommendations in a real fleet of vessels, to present to the project partner.
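A minimal sketch of the regression idea in approach 1 is given below: daily fuel consumption is modelled as a cubic function of speed (basic resistance physics) inflated by a term that grows with days since the last hull clean, and the coefficients are recovered from synthetic data. The functional form and all numbers are illustrative assumptions, not Carisbrooke data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy physics-informed regression: fuel ~ a * speed^3 * (1 + k * days_since_clean),
# fitted to synthetic observations to recover a baseline coefficient and a fouling rate.

rng = np.random.default_rng(2024)
n = 500
speed = rng.uniform(8, 15, n)                    # knots (synthetic)
days_since_clean = rng.uniform(0, 400, n)        # days (synthetic)

def fuel_model(X, a, k):
    v, d = X
    return a * v**3 * (1.0 + k * d)              # tonnes/day

true_a, true_k = 0.008, 0.0012                   # invented "ground truth"
fuel = fuel_model((speed, days_since_clean), true_a, true_k) * rng.normal(1, 0.05, n)

(a_hat, k_hat), _ = curve_fit(fuel_model, (speed, days_since_clean), fuel, p0=[0.01, 0.001])
print(f"fitted baseline coefficient a = {a_hat:.4f}, fouling rate k = {k_hat:.5f} per day")

# Implied fuel (and emissions) penalty after a year without hull cleaning:
print(f"estimated consumption increase after 365 days: {k_hat * 365:.0%}")
```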

The partner on this project will be Carisbrooke Shipping Ltd, an Isle-of-Wight-based shipping company that operates a fleet of 27 vessels and is highly motivated to investigate and address biofouling due to its emissions impact. A representative from Carisbrooke, Natalia Walker, will serve as an external consultant within the supervisory team. Natalia will join supervisory meetings, as necessary, to provide problem domain knowledge to support the modelling approaches. Carisbrooke will also provide monitoring data from their entire fleet, to serve as a training set, and advise on practicality and applicability of assumptions and conclusions drawn from our modelling and numerical results. Carisbrooke are also open to implementing recommendations based on the outcomes of this project in their fleet of vessels, providing a direct route to impact. If successful, this project could have impact in the global shipping industry, informing policy decisions to meet emissions targets.


[1] International Maritime Organisation, "Fourth IMO greenhouse gas study," 2020.
[2] L. Huang, B. Pena, Y. Liu and E. Anderlini, "Machine learning in sustainable ship design and operation: A review," Ocean Engineering, vol. 266, no. 12, 2022.
[3] Publications Office of the European Union, "Monitoring, reporting and verification of ships’ emissions," 2023.
[4] International Maritime Organisation, "Impact of ships' biofouling on greenhouse gas emissions," 2022.
[5] British Petroleum, "Low Carbon Shipping."
Project Title: Variational Probabilistic Numerical Methods
Theme: Sustainable AI

Training AI models is associated with a massive energy cost, and a crucial part of making these models sustainable is reducing this cost. An emerging tool that could help with this is probabilistic numerical methods (PNM); these are numerical methods that come equipped with probabilistic quantification of the level of accuracy. This can allow users to use lower fidelity models (e.g. high tolerance linear solvers, or coarsely discretised PDEs, as commonly used in digital twins [1]) while still obtaining trustworthy results, because of the error quantification. This has been demonstrated in several published works, e.g. in industrial process monitoring [2], where use of a PNM allowed for a 90% reduction in computational effort compared to using standard, high accuracy solvers.

A significant factor that hampers further uptake of PNM is that the current state-of-the-art is only efficient for a narrow class of problems of limited practical interest. This makes it impossible to apply PNM to really challenging problems with huge computational cost and non-negligible error, such as in ocean and climate modelling.

To mitigate this, in this project we will apply the common paradigm in Bayesian statistics of variational inference (VI). In VI, we build an approximate distribution that is “close” to the true distribution in a mathematically well-defined way, but which is more computationally convenient to work with. By developing “Variational PNM” we will expand applicability of PNM to a much wider class of problems. Our goal is to apply variational inference to ocean and climate models, such as the well-known MITgcm model [3]. We anticipate seeking collaboration with members of NOC in later stages of the project to facilitate this.
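As a minimal illustration of the variational building block (not of a full PNM), the sketch below fits a Gaussian approximation to a toy one-dimensional target by stochastic gradient ascent on the ELBO using the reparameterisation trick; the target density and step sizes are arbitrary choices.

```python
import numpy as np

# Minimal variational-inference sketch: fit q(x) = N(mu, sigma^2) to an intractable
# 1-D target by stochastic gradient ascent on the ELBO, using the reparameterisation
# trick. The target here is a toy heavy-tailed density, standing in for the posterior
# over a numerical quantity (e.g. a discretisation error) in a probabilistic solver.

rng = np.random.default_rng(0)

def dlogp(x):
    """Gradient of the (unnormalised) log target: a Student-t-like density centred at 2."""
    return -6.0 * (x - 2.0) / (5.0 + (x - 2.0) ** 2)

mu, log_sigma = 0.0, 0.0
lr, n_mc = 0.05, 64

for step in range(2000):
    eps = rng.standard_normal(n_mc)
    sigma = np.exp(log_sigma)
    x = mu + sigma * eps                              # reparameterised samples from q
    g = dlogp(x)
    grad_mu = g.mean()                                # d ELBO / d mu
    grad_log_sigma = (g * eps).mean() * sigma + 1.0   # data term + entropy derivative
    mu += lr * grad_mu
    log_sigma += lr * grad_log_sigma

print(f"variational approximation: mu = {mu:.3f}, sigma = {np.exp(log_sigma):.3f}")
```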

[1] Niederer, S. A., Sacks, M. S., Girolami, M., & Willcox, K. (2021). Scaling digital twins from the artisanal to the industrial. Nature Computational Science, 1(5), 313-320.
[2] Chris J. Oates, Jon Cockayne, Robert G. Aykroyd & Mark Girolami (2019) Bayesian Probabilistic Numerical Methods in Time-Dependent State Estimation for Industrial Hydrocyclone Equipment, Journal of the American Statistical Association, 114:528, 1518-1531, DOI: 10.1080/01621459.2019.1574583
[3] MITgcm: M.I.T General Circulation Model master code and documentation repository, GitHub, https://github.com/MITgcm/MITgcm

Project Title: Interconnectivity between power/heat and grid constraints in UK cities
Theme: AI for Sustainable Energy and Buildings

One of the main challenges in the UK's transition to net zero is balancing the energy demand for power and heat against grid constraints. This project will aim to establish:
- the current purpose, patterns and scale of energy demand in UK cities;
- how underlying local environmental, social and economic trends have affected these patterns over time;
- the future policies that are likely to affect these trends.
This will lead to the generation of a city model with synthetic data.
Project Title: AI for large scale building retrofit optimisation strategies to meet decarbonisation targets
Theme: AI for Sustainable Energy and Buildings

The urgent need to mitigate climate change has placed a spotlight on reducing energy demand within the building sector, which accounts for a significant portion of global energy consumption and carbon emissions. Traditional retrofit approaches often focus on individual technologies, lacking the integration necessary for optimal efficiency and unable to exploit the economies of scale that could improve the cost-effectiveness of retrofit interventions. This project addresses this gap by developing a multifunctional toolkit designed to facilitate large-scale building retrofit interventions that are cost-optimal with respect to investment, carbon reduction, and primary energy savings.
The proposed research aims to create a comprehensive toolkit that seamlessly integrates various energy efficiency measures and technologies. These include enhancements to building envelopes, installation of heat pumps, incorporation of photovoltaic systems, deployment of advanced energy management and control systems, and utilization of thermal and electric storage solutions. By combining these elements, the toolkit will enable a holistic approach to retrofitting that maximizes energy savings and minimizes carbon footprints across a wide array of buildings.
The research methodology entails the creation of hybrid models that incorporate physics, machine learning, and statistics. These models must be capable of managing the complexities associated with the integration of multiple technologies across a variety of building types. As part of the research activity, extensive datasets on building performance, occupant behaviour, and energy consumption patterns will be analysed by leveraging the models that have been developed. The insights identified in the modelling and analysis process will inform the optimization process, allowing the toolkit to recommend tailored retrofit strategies that align with specific building characteristics and usage patterns.
To ensure practical applicability, the toolkit will be tested through large-scale trials funded by the Local Authority Delivery scheme (LAD) in Hampshire. This real-world validation will provide critical feedback for refining the toolkit and demonstrate its effectiveness in achieving energy and carbon reduction goals at scale, verified by means of building monitoring. Collaboration with local authorities and industry stakeholders will facilitate the deployment of the toolkit and support the transition toward more sustainable building practices.
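As a minimal sketch of the cost-optimality idea (hypothetical measure costs and carbon savings, with savings naively assumed additive), enumerating retrofit packages and keeping only the Pareto-optimal ones over investment and carbon reduction illustrates, at toy scale, the kind of output the toolkit would produce:

```python
from itertools import combinations

# Hypothetical retrofit measures: (name, capital cost in £, annual CO2 saving in kg).
MEASURES = [
    ("wall insulation", 8000, 900),
    ("heat pump", 12000, 1800),
    ("rooftop PV", 6000, 700),
    ("battery storage", 4000, 150),
    ("smart controls", 1000, 250),
]

def evaluate(package):
    """Total cost and saving of a package (interactions between measures ignored)."""
    return sum(m[1] for m in package), sum(m[2] for m in package)

packages = [c for r in range(1, len(MEASURES) + 1) for c in combinations(MEASURES, r)]
scored = [(evaluate(p), [m[0] for m in p]) for p in packages]

# Keep Pareto-optimal packages: no other package is both cheaper and saves more CO2.
pareto = [((c, s), names) for (c, s), names in scored
          if not any(c2 <= c and s2 >= s and (c2, s2) != (c, s) for (c2, s2), _ in scored)]
for (c, s), names in sorted(pareto):
    print(f"£{c:>6} -> {s:>5} kgCO2/yr : {', '.join(names)}")
```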
Prediction and optimization of electrical demand over varying timescales for mini-grids in sub-Saharan Africa AI for Sustainable Energy and BuildingsGlobally, there are around 800 million people without access to electricity, with around 600 million of them living in sub-Saharan Africa. The Energy for Development (e4D) programme at the University of Southampton was created in 2010 to address this challenge and initiated seminal studies in electricity access for hard-to-reach poor areas in sub-Saharan Africa and beyond [1]. This included the design and construction of five solar PV based mini-grids with partners in Kenya and Uganda. The supervisory team continues to actively monitor and support the operation of these mini-grids. One key observation over more than ten years of operation has been the very different rates of growth in energy demand at different locations, despite broadly similar characteristics [2]. This clearly has implications for cost-benefit analysis and system component lifetimes.
Within this programme, this project aims to further the understanding of electricity demand, its growth and optimisation within mini-grids in Kenya and Uganda. Specific challenges include:
- Estimation of electricity consumption profiles for households and businesses of different types based on measured datasets: these can be used to train models and construct synthetic datasets (a minimal sketch of this step is given after the list).
- Prediction of electricity demand growth over mini-grid lifetime: this is a key aspect of the cost-benefit analysis for mini-grid projects.
- Risk analysis of combined loads exceeding design thresholds: given the contained nature of mini-grids, this is critical for ensuring systems do not fail earlier than predicted.
- Optimal demand management in mini-grids under conditions of constrained generation and storage capacity: how to manage and control heavier loads to maximize availability and overall benefit to mini-grid users.
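
As a minimal sketch of the first challenge above (hypothetical profile shapes, not the actual e4D measurements), measured customer profiles can be clustered into archetypes and resampled to construct synthetic datasets for model training:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical measured data: daily profiles (kWh per hour) for 200 customers.
measured = rng.gamma(shape=2.0, scale=0.05, size=(200, 24))
measured[:, 18:22] *= 4.0  # crude evening peak, for illustration only

# Cluster customers into a few archetypes (e.g. households vs. small businesses).
archetypes = KMeans(n_clusters=3, n_init=10, random_state=0).fit(measured)

def synthetic_profile(cluster_id: int, n_days: int = 30) -> np.ndarray:
    """Resample cluster members day by day and perturb them, giving a synthetic
    multi-day load profile that preserves the archetype's shape."""
    members = measured[archetypes.labels_ == cluster_id]
    idx = rng.integers(0, len(members), size=n_days)
    return members[idx] * rng.normal(1.0, 0.1, size=(n_days, 24))

print(synthetic_profile(0).shape)  # (30, 24)
```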

References
1. https://doi.org/10.3390/en12050778
2. https://doi.org/10.1109/jproc.2019.2924594
Data-driven overheating risk assessment methods to future-proof UK schools’ stockAI for Sustainable Energy and BuildingsClimate change is significantly increasing the frequency and intensity of heatwave events worldwide. In the United Kingdom, climate projections indicate that by 2070, seasonal average temperatures could rise by up to 5.1°C. This escalation poses a substantial risk to the UK's school stock, as higher temperatures can lead to overheating in educational buildings. Overheating not only affects thermal comfort but also has detrimental impacts on students' cognitive performance, health, and overall well-being.
The primary aim of this project is to understand and assess the overheating risk in UK schools, with the objective of future-proofing them in light of the evolving climate. The assessment will consider the multifaceted effects of overheating on students, including decreased cognitive performance, reduced thermal comfort, and potential health issues like heat strain. By evaluating these impacts, the project seeks to identify the buildings and rooms that are most at risk and suggest potential strategies that can be applied both in the short term (e.g. behavioural and operational changes) and in the long term (e.g. building refurbishment and specific technologies).
To achieve this, the research will need to develop an assessment approach capable of spanning from regional to national scales. This will involve collecting and analysing data on local climate, school building typologies, construction technologies, operational and occupancy patterns, as well as other building data, across different regions. Simulation tools (physics-driven methods) and machine learning (data-driven methods) will be combined to model thermal performance at the classroom level under various climate scenarios.
The combined use of physics-driven and data-driven methods will enable the handling of large datasets and the prediction of overheating risks with adequate accuracy at scale. Hampshire County Council’s school stock will be used as the initial case study to test the scalability of the proposed approach, with the goal of extending its applicability to the entire school stock in the UK.
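As a minimal sketch of the combined physics-driven/data-driven idea (all classroom features and the overheating metric here are synthetic stand-ins for simulation outputs), a regressor trained on a modest batch of simulations can then screen many classrooms cheaply:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic "simulation results": one row per classroom, target = simulated
# occupied hours above a 26 °C comfort threshold in a hot reference year.
n = 500
X = np.column_stack([
    rng.uniform(0.1, 0.6, n),   # glazing ratio
    rng.uniform(0.5, 3.0, n),   # wall U-value (W/m²K)
    rng.uniform(0.0, 1.0, n),   # external shading factor
    rng.uniform(2.0, 6.0, n),   # floor area per pupil (m²)
])
hours_over = 200*X[:, 0] + 60*X[:, 1] - 150*X[:, 2] - 10*X[:, 3] + rng.normal(0, 10, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, hours_over, random_state=0)
surrogate = GradientBoostingRegressor().fit(X_tr, y_tr)
print("held-out R²:", round(surrogate.score(X_te, y_te), 2))
# The trained surrogate can then rank thousands of classrooms by predicted
# overheating hours without running a full dynamic simulation for each one.
```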
AI to address Scope 3: Procurement & Supply Chain emissions estimation challengeAI for Sustainable Energy and BuildingsAs part of global efforts to reduce carbon emissions, many organisations are utilising the Greenhouse Gas (GHG) Protocol to estimate their carbon emissions. The Protocol is an internationally recognised standard for greenhouse gas accounting that divides emissions into three categories, or “scopes”. Scope 1 covers direct emissions from sources that are owned or controlled by the organisation; Scope 2 covers indirect emissions from the consumption of purchased energy, such as electricity; and Scope 3 encompasses all other indirect emissions across the organisation’s value chain, including emissions from procurement, business travel, and waste.

Among the three scopes, Scope 3 often represents the largest share of an organisation’s total emissions, typically accounting for 60-80% of the overall carbon footprint. Despite its significant impact, many organisations struggle to estimate Scope 3 emissions due to the complexity of collecting reliable data from suppliers, a lack of clarity on estimation methodologies, and the fact that Scope 3 reporting is largely voluntary in the UK.

One of the most challenging subcategories under Scope 3 is procurement and supply chain emissions. These are difficult to estimate due to constantly changing suppliers and the challenges in obtaining accurate emissions data from them. This research aims to address these challenges and provide clear, practical pathways that allow the accurate estimation and reporting of Scope 3 emissions.

The goal is to develop analytical approaches that coherently streamline the data, provide clarity for subcategories, and support modelling that results in better estimation and reporting of Scope 3 emissions. This approach lends itself well to the application of artificial intelligence (AI) and machine learning (ML). The research will initially focus on higher education institutions (HEIs) in the UK, with the potential for broader application across various sectors.

Currently, many HEIs use the Higher Education Supply Chain Emissions Tool (HESCET), which relies on a spend-based method for estimating emissions. However, this method tends to inflate emissions estimates because environmentally friendly (low carbon) products and services are often more expensive. To address this issue, the research will employ ML techniques such as natural language processing (NLP), semi-supervised and unsupervised learning, and advanced modelling to develop a more accurate and data-efficient model. This type of modelling will incorporate both supplier and product/service information to deliver better emissions estimates.
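
As a purely illustrative sketch of the classification step (made-up spend-line descriptions and category labels, not the HESCET/Proc HE taxonomy itself), a simple text classifier can map free-text procurement descriptions to emission-factor categories:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled spend lines: description -> emission-factor category.
descriptions = [
    "laptop computers for teaching labs",
    "laboratory reagents and consumables",
    "catering for graduation ceremony",
    "cloud computing services subscription",
]
categories = ["ICT equipment", "Lab supplies", "Catering", "ICT services"]

classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(descriptions, categories)

print(classifier.predict(["annual licence for cloud data storage"]))
# A category-specific emission factor (kgCO2e per £) would then be applied,
# down-weighted where suppliers provide verified low-carbon products or services.
```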

Key objectives of the model include:
• Classifying suppliers into appropriate categories (e.g., Proc HE) based on emissions profiles.
• Weighting low-carbon products/services more accurately, ensuring they are appropriately reflected in emission estimates.
• Applying relevant emission factors to provide more reliable carbon footprint estimates.
• Developing analytical approaches to coherently streamline the data and provide clarity for subcategories and modelling, resulting in better estimation.
• Undertaking a comparison between the outputs from the developed tools and those that currently exist.
• Providing policy guidelines to address the paucity of clear modelling for Scope 3 emissions.

Additionally, the research will be augmented to integrate AI to automate lifecycle analysis (LCA), identifying trends in product use, disposal, and recycling. This will allow for more accurate estimates of downstream emissions, further enhancing the precision of Scope 3 reporting.

Expected Impact
The outcome of this research will be a novel AI- and ML-based framework for estimating Scope 3 emissions that significantly improves upon existing methods, such as HESCET. This framework will not only enhance the accuracy of carbon accounting but also provide scalable solutions that can be adopted by organisations across different sectors.
AI to address poverty alleviation through rezoning and sustainability of coastal cities and townsAI for Sustainable Energy and BuildingsApproximately 10% of the UK’s population live in coastal cities and towns. While these areas often benefit from rich architectural and cultural heritage, they face intersecting economic, social, and environmental challenges, resulting in significant socio-economic inequities. Many coastal areas lack access to education and employment opportunities while also suffering from digital and transport disconnectedness, typically falling below national averages on economic and social indicators. This project aims to address poverty alleviation in coastal cities by developing an AI-driven, data-informed toolkit to support urban policy and rezoning strategies that foster economic revitalization and sustainable development for vulnerable communities.
This research will integrate urban characteristics—such as urban form, population density, health deprivation, air quality, noise levels, environmental conditions, and accessibility—into a model which captures coastal-specific challenges. Through machine learning, this toolkit will model potential urban intervention scenarios, assessing their impacts on poverty levels, environmental sustainability, and resilience. This data-informed approach will provide a practical toolkit for city planners, policymakers, and other decision-makers, guiding rezoning and infrastructure investments to promote economic stability, health, quality of life, and climate resilience. Although initially developed for Southampton, the toolkit aims to provide a replicable framework for sustainable regeneration that can be applied to other coastal cities and towns.
AI directed laser synthesis, patterning and manufacture of 2D TMDC nanodevices AI for Sustainable Operations and Circular EconomyIn this PhD studentship, the candidate will contribute to the development of a sustainable, low cost system exploiting direct laser printing of 2D semiconductor based nanodevices by using AI for additive manufacture to print 2D materials exactly and only where they are needed.

Two-dimensional (2D) materials have attracted global interest for atomically thin next-generation electronic and optoelectronic devices, opening up exciting opportunities for technological applications at the monolayer limit. Their extraordinary properties could revolutionise areas ranging from printed electronics to life sciences, imaging and quantum technologies to name just a few. However, the synthesis of these materials is often complex and capital intensive, relying mainly on vacuum based processing tools. This new technology allows film patterning on various substrates, including flexible and curved, all processed under room temperature ambient conditions with instant spectroscopic feedback, making it highly suitable for neural network driven, sustainable and scalable rapid prototyping and additive manufacture.

This AI driven project would thus be suitable for a highly motivated candidate with a strong physics/materials/engineering related background and programming abilities to develop highly transferable skills in cleanroom sample fabrication and electronic/photonic device characterisation, laser materials processing, numerical simulations and machine learning with input from industry partners and working with leading academic experts.
Using machine learning to develop a global vegetation phenology modelAI for the Natural EnvironmentThe prediction, in space and time, of vegetation phenological variables such as time of onset of ‘greenness’, time of end of ‘greenness’, duration of the growing season, rate of ‘green-up’ and rate of senescence can provide the information needed to increase understanding of the effects of climate change on vegetation and to estimate climate-carbon feedbacks accurately. Such phenological variables can be predicted from ground or remotely sensed data. There is still considerable uncertainty in predicting these phenological variables at the global scale. Factors controlling these events can vary across geographical region and time, which makes it difficult to develop a universally applicable model. The availability of continuous observation from satellites and advances in machine learning approaches provide a new opportunity to develop location-specific vegetation phenology models. Using an extensive global dataset derived from a combination of satellite, ground and meteorological observations over the last 30 years and a machine learning/deep learning approach, the project will develop a model of vegetation phenology at a fine spatial resolution (~250 m) across the globe. This would be a crucial input to many terrestrial biogeochemical models, which currently lack an accurate representation of phenology. Moreover, the model will be able to predict vegetation growth under future climate change scenarios and would improve estimation of carbon and energy budgets.
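As a minimal sketch of the modelling idea (synthetic predictors standing in for the satellite, ground and meteorological records), a per-pixel regressor can map climate drivers to a phenological variable such as the start of season:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

# Hypothetical training table: one row per pixel-year.
n = 2000
gdd = rng.uniform(100, 900, n)        # accumulated growing degree days by day 120
chill = rng.uniform(0, 60, n)         # winter chilling days
photoperiod = rng.uniform(9, 15, n)   # mean spring day length (hours)
precip = rng.uniform(50, 400, n)      # spring precipitation (mm)
X = np.column_stack([gdd, chill, photoperiod, precip])

# Synthetic target: day of year of 'green-up' onset.
start_of_season = 160 - 0.05*gdd - 0.3*chill - 2.0*photoperiod + rng.normal(0, 5, n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, start_of_season)
importances = dict(zip(["gdd", "chill", "photoperiod", "precip"],
                       model.feature_importances_.round(2)))
print(importances)  # which drivers matter where is the scientific question at scale
```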
AI-driven Trustworthy Energy Advice for End UsersAI for Sustainable Energy and BuildingsInvesting in cleaner renewable energy generation in domestic settings is highly challenging for end users. Installing solar PV, battery storage and/or switching to EVs is expensive and requires reasoning about long-term costs and high uncertainty, and user behaviour will affect both the return on investment and the environmental sustainability of the installation.

The main research problem in this project will be to explore how AI tools can help domestic end users in their switch to renewable energy, and to do so in a trustworthy manner.

This project will focus on several different aspects including:

- Optimisation of a renewable energy installation given the properties of a user’s home and historical consumption data.

- Lifetime monitoring and optimisation of the system, including automatic energy management (through heating, car charging and import/export of energy).

- Suggestion of behaviour interventions based on consumption data and limited interactions with the end user.

Trust is a key aspect of this – the project will therefore explore techniques for quantifying and communicating uncertainty, and for explaining calculations and assumptions clearly to users. To enable this, it will involve running focus groups, surveys and field trials with users.

Another aspect is to consider possible incentive schemes to encourage behaviour modifications or demand response.

In terms of methodology, the project will combine the use of optimisation, machine learning (for demand / behaviour predictions, including under incentives) and aspects of explainable AI.
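
As a minimal sketch of the first aspect in the list above (a hypothetical home, tariff and component costs; a real advisor would add uncertainty quantification and clear explanations, as discussed), a brute-force search over PV and battery sizes exposes the basic cost trade-off:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical hourly data for one year: household demand (kWh) and PV yield per kWp.
HOURS = 8760
demand = np.clip(rng.normal(0.4, 0.2, HOURS), 0.05, None)
pv_per_kwp = np.clip(np.sin(np.linspace(0, 2 * np.pi * 365, HOURS)), 0, None) * 0.25

IMPORT_PRICE, EXPORT_PRICE = 0.30, 0.15    # assumed tariff, £/kWh
PV_COST, BATT_COST, YEARS = 1200, 400, 20  # assumed £/kWp, £/kWh, horizon

def annual_bill(pv_kwp, batt_kwh):
    soc, cost = 0.0, 0.0
    for load, gen in zip(demand, pv_per_kwp * pv_kwp):
        net = gen - load
        if net >= 0:                       # surplus: charge battery, then export
            charge = min(net, batt_kwh - soc)
            soc += charge
            cost -= (net - charge) * EXPORT_PRICE
        else:                              # deficit: discharge battery, then import
            discharge = min(-net, soc)
            soc -= discharge
            cost += (-net - discharge) * IMPORT_PRICE
    return cost

options = [(pv, b, annual_bill(pv, b) * YEARS + pv * PV_COST + b * BATT_COST)
           for pv in (0, 2, 4, 6) for b in (0, 5, 10)]
print(min(options, key=lambda o: o[2]))    # (pv_kwp, battery_kwh, total 20-year cost)
```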
AI For Sustainable Gas Turbine DesignAI for Transportation and LogisticsEmissions created by a gas turbine engine throughout its life-cycle (including its design, manufacture, operation and disposal) can be significant, but AI/ML approaches offer a means by which these can be substantially reduced. AI/ML-driven design processes can improve engine efficiency or enable engines to take better advantage of alternative, low-carbon fuel sources such as SAF or hydrogen. AI/ML-accelerated simulations can reduce the computational (and therefore energy) burden associated with the design process and reduce the need for physical testing. Accelerated simulations also enable the robustness of any design to through-life wear and damage to be greatly improved, thereby extending engine life and reducing the need for spares and maintenance.
Building on existing work within the Rolls-Royce University Technology Centre for Computational Engineering, this project aims to develop novel AI/ML approaches to gas turbine engineering. Topics of interest include, but are not limited to:
• Novel methods for generative design – leveraging generative AI to instantly provide a design solution to an engineer meeting all requirements and constraints.
• Reducing simulation cost within design optimisation – leveraging physics-enhanced AI/ML to reduce the number/cost of simulations (a minimal surrogate sketch follows this list).
• Improving design robustness – developing novel AI/ML approaches to probabilistic engineering design.
• Improving maintenance decision making – leveraging AI/ML to enable better decision making during maintenance.
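As a minimal illustration of the surrogate idea in the second topic above (a toy analytic function stands in for an expensive CFD or finite-element simulation), a Gaussian-process model is fitted to a handful of evaluations and then queried cheaply inside an optimisation loop:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_simulation(x):
    """Stand-in for an expensive aero/thermal simulation over one design variable."""
    return np.sin(3 * x) + 0.5 * x**2

# A small design of experiments: only eight "simulations" are run.
X_train = np.linspace(-2, 2, 8).reshape(-1, 1)
y_train = expensive_simulation(X_train).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X_train, y_train)

# The surrogate is evaluated densely at negligible cost; the next design to
# simulate is chosen by a simple lower-confidence-bound rule.
X_grid = np.linspace(-2, 2, 401).reshape(-1, 1)
mean, std = gp.predict(X_grid, return_std=True)
next_design = X_grid[np.argmin(mean - std)]
print("candidate design to simulate next:", next_design)
```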
Large Language Models for Habitat and Environmental Impact Event Extraction with Location RefinementAI for the Natural EnvironmentMonitoring risks to natural habitats is a critical element of protecting and sustaining biodiversity. Pollution, wildfire, flooding, the appearance of invasive species and other habitat-changing events are highly localized and difficult to monitor manually. Often an event is observed well after damage to biodiversity has been done. Social media and Natural Language Processing offer an opportunity to use AI to review, around the clock, localized public posts from concerned citizens and volunteer environmentalists.

This PhD will explore event extraction using deep-learning-based Large Language Models (LLMs) applied to social media posts about instances of habitat and environmental impact. Posts will be text-based (posted text + metadata) but will importantly include links to associated media content such as images and videos of the mentioned locations and impact. Relation-Aware Prototyping [Meng 2023] will be coupled with ideas from work on LLM augmentation using OpenStreetMap [Manvi 2023] to provide a novel Information Extraction model that supports prototyping with hyper-local location refinement of events and event context. To reduce the environmental impact of GPU use for LLM training, parameter-efficient fine-tuning methods such as QLoRA will be employed from the start.
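
A minimal configuration sketch for the parameter-efficient fine-tuning step, assuming the Hugging Face transformers/peft/bitsandbytes stack and a placeholder base model (the actual base model and event-extraction architecture are open choices for the PhD):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

BASE_MODEL = "meta-llama/Llama-2-7b-hf"  # placeholder; any suitable causal LM could be used

# 4-bit quantisation of the frozen base weights (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters: only a small fraction of parameters is ever trained,
# which is where the reduction in GPU time and energy comes from.
lora_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                         target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model
```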


The project will have an opportunity to engage with Kew Gardens as a potential end user partner.
Resource-Efficient Lifelong Robot LearningSustainable AIEquipping robots with the ability to learn a growing set of tasks over their operational lifetime—rather than focusing on mastering individual tasks—presents a significant challenge in robot learning. Lifelong learning robots often struggle with catastrophic forgetting when learning from changing input distributions, which causes the robot to forget old knowledge when learning new tasks. They are also expected to leverage previous knowledge to accelerate learning of new tasks without the need for complete retraining. This is often referred to as the stability-plasticity dilemma, where stability denotes the retention of old knowledge, and plasticity refers to the acquisition of new knowledge.

Recent advancements in lifelong and continual learning have proposed three primary strategies to address this dilemma: regularization, dynamic growth, and experience replay [1]. However, these methods typically demand high storage and computational resources, leading to increased energy consumption for data storage, processing, and transmission. Additionally, robots often face limitations in onboard resources, making it difficult to support lifelong learning outside controlled lab environments and to retain and integrate experiences from various environments and tasks. Although some recent approaches have shown promise in improving efficiency in continual robot learning [2, 3], they often come at the cost of reduced performance compared to single-task models, where each task is learned with a dedicated model.

This project aims to develop a continual on-device robot learning system that improves the trade-off between stability and plasticity while enhancing resource efficiency without compromising performance. The system will be designed for deployment on resource-constrained, non-networked robotic platforms and aims to contribute to sustainability by reducing carbon emissions through improved operational efficiency, including minimizing the need for frequent retraining and optimizing data handling processes.
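
As a minimal sketch of a storage-bounded experience-replay component (one of the three strategies mentioned above), reservoir sampling keeps a fixed-size memory that remains an unbiased sample of everything the robot has seen, so storage does not grow with the number of tasks:

```python
import random

class ReservoirReplayBuffer:
    """Fixed-capacity replay memory maintained by reservoir sampling."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0  # total experiences observed over the robot's lifetime

    def add(self, experience) -> None:
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(experience)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:          # keep with probability capacity / seen
                self.buffer[j] = experience

    def sample(self, batch_size: int):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

# Usage: interleave a few replayed experiences with new-task data at each update.
memory = ReservoirReplayBuffer(capacity=500)
for step in range(10_000):
    memory.add(("observation", "action", "reward", step))
replay_batch = memory.sample(32)
```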

[1] Wang, L., Zhang, X., Su, H. and Zhu, J., 2024. A comprehensive survey of continual learning: theory, method and application. IEEE Transactions on Pattern Analysis and Machine Intelligence.
[2] Hafez, M.B., Immisch, T., Weber, T. and Wermter, S., 2023. Map-based experience replay: a memory-efficient solution to catastrophic forgetting in reinforcement learning. Frontiers in Neurorobotics, 17, p.1127642.
[3] Schwarz, J., Czarnecki, W., Luketina, J., Grabska-Barwinska, A., Teh, Y.W., Pascanu, R. and Hadsell, R., 2018, July. Progress & compress: A scalable framework for continual learning. In International conference on machine learning (pp. 4528-4537). PMLR.
Advancing Sustainable AI Solutions for Refrigerated Goods TransportationAI for Transportation and LogisticsIn an era where global supply chains demand both efficiency and responsibility, there is a pressing need to integrate artificial intelligence (AI) into refrigerated goods transportation that aligns with the goals of sustainable development [1]. Research indicates that up to 30% of perishable goods can be lost during transit due to temperature variations and inefficient logistics. To address this challenge, we propose the implementation of holistic AI technologies designed to enhance the sustainability of refrigerated goods transportation.
The project will leverage a large dataset of refrigerator truck operations from DP World to develop an AI algorithm that integrates with a multi-objective optimisation framework, considering sustainability, profit, and preferences from human decision-makers. This dataset will be multi-disciplinary, involving refrigerator temperatures and the state of refrigerated food. This is expected to be a dynamic system where the AI algorithm will be deployed on live data to inform temperature regulation and route planning. Transparency in decision-making processes will be a cornerstone, ensuring stakeholders have visibility into AI operations and fostering trust in the evolving landscape of AI applications in logistics. This initiative aims to make a substantial contribution to minimising environmental impact due to food waste, energy consumption, and pollution emissions.


References
Toma, L., Revoredo-Giha, C., Costa-Font, M. and Thompson, B. (2020), Food Waste and Food Safety Linkages along the Supply Chain. EuroChoices, 19: 24-29. https://doi.org/10.1111/1746-692X.12254
Duret, S., Hoang, H.-M., Derens-Bertheau, E., Delahaye, A., Laguerre, O. and Guillier, L. (2019), Combining Quantitative Risk Assessment of Human Health, Food Waste, and Energy Consumption: The Next Step in the Development of the Food Cold Chain?. Risk Analysis, 39: 906-925. https://doi.org/10.1111/risa.13199
Piramuthu, S. and Zhou, W. (2016). Perishable food and cold-chain management. In RFID and Sensor Network Automation in the Food Industry (eds S. Piramuthu and W. Zhou). https://doi.org/10.1002/9781118967423.ch8
Mercier, S., Villeneuve, S., Mondor, M. and Uysal, I. (2017), Time–Temperature Management Along the Food Cold Chain: A Review of Recent Developments. Comprehensive Reviews in Food Science and Food Safety, 16: 647-667. https://doi.org/10.1111/1541-4337.12269
Meneghetti, A., & Ceschia, S. (2020). Energy-efficient frozen food transports: The refrigerated routing problem. International Journal of Production Research, 58(14), 4164-4181.
Route Design and Pricing: Workers' Choice in Green Last Mile DeliveryAI for Transportation and LogisticsThis research extends the traditional Vehicle Routing Problem (VRP) to better model delivery scenarios in the gig economy, where workers (e.g., riders or porters) are not employed directly by the company and may refuse jobs based on convenience or pricing. Unlike traditional VRP, which focuses on minimizing costs with an assumed fully available fleet, this approach accounts for workers’ preferences and the likelihood of them accepting delivery tasks.

The key challenge is that gig workers can choose whether or not to accept jobs depending on factors like price, convenience, and other platforms offering work. This shifts the problem from simply optimizing routes to balancing cost minimization with driver willingness to accept tasks. The solution involves learning the probability of acceptance via ML/AI models, which is influenced by factors such as pricing, route characteristics (e.g., preferred delivery locations, type of vehicle required), and personal preferences.

In this model, pricing decisions are made for each route to ensure a high probability that workers (with low-carbon modes of work such as porters or bikers) will accept the work. The goal is to minimize the overall cost and environmental impact of deliveries while ensuring routes are attractive to drivers by meeting a minimum acceptance probability across the fleet. The operational model combines cost efficiency with competitive driver compensation and preferences, ensuring routes are both practical and appealing. This will lead to a higher utilisation of the workers in the gig economy, reduced carbon emissions, and overall more sustainable solutions both environmentally and economically.
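
As a minimal illustration of the pricing logic (a logistic acceptance model with made-up coefficients; in the project these would be learned from data and would capture worker heterogeneity), the cheapest price meeting a target acceptance probability follows by inverting the model:

```python
import math

# Hypothetical learned model: P(accept) = sigmoid(b0 + b_price*price + b_dist*distance_km).
B0, B_PRICE, B_DIST = -4.0, 0.6, -0.15

def acceptance_probability(price: float, distance_km: float) -> float:
    z = B0 + B_PRICE * price + B_DIST * distance_km
    return 1.0 / (1.0 + math.exp(-z))

def minimum_price(distance_km: float, target_prob: float = 0.9) -> float:
    """Invert the logistic model to find the cheapest offer meeting the target."""
    logit = math.log(target_prob / (1.0 - target_prob))
    return (logit - B0 - B_DIST * distance_km) / B_PRICE

for d in (2.0, 5.0, 10.0):
    p = minimum_price(d)
    print(f"{d:4.1f} km route -> offer £{p:.2f} (P(accept) = {acceptance_probability(p, d):.2f})")
```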

Key differences from traditional VRP:

Pricing and cost balance: Each route is priced based on the attractiveness to workers, balancing minimizing company costs with maximizing route acceptance.
Worker preferences: Individual worker characteristics, such as preferred routes or vehicle types, if any, are considered.
Acceptance probability: The model ensures that each route has a sufficient chance of being accepted by a worker, considering their alternative options. This will be calculated with a data driven approach, considering heterogeneity of the worker population.
Gig economy dynamics: Unlike fixed-fleet VRPs, this model assumes drivers are not always available and can choose to work elsewhere if the job is unattractive.
Overall, this extended VRP model seeks to optimize delivery routes to reduce carbon emissions and decrease operating costs, while remaining attractive to workers in the gig economy.
Using AI to make road fleet operations more efficient, safe, and sustainableAI for Transportation and LogisticsRoad-based vehicle fleets are the cornerstone of modern-day transport and logistics systems, supporting a wide range of passenger and freight travel needs. From public buses and Heavy Goods Vehicles (HGVs) operating in interurban environments to taxis and cargo cycles serving dense city cores, fleets represent a sizeable proportion of traffic on the roads, and can therefore be attributed a considerable share of the resulting adverse impacts, such as congestion, accidents, energy consumption, pollution, and noise. Addressing these impacts at the source, i.e., at the individual vehicle and driver/operator level, can, therefore, deliver substantial benefits for the whole of the transport system.

Such an endeavour, however, has not been fruitful to date due to a prevailing lack of methods and tools aimed at understanding the effects of different vehicle- and driver-related parameters on the efficiency, safety, and sustainability of fleet operations. The aim of this project is, hence, to leverage the potential of big data and AI to obtain a clearer insight into such effects, including, for example, vehicle technical characteristics and driver/operator moods, preferences, and behaviours, and use these insights to improve existing operational and strategic policies.

To this end, we plan to utilise relevant data from existing large vehicle fleets to develop models that will be integrated into a prototype training platform to be used across different fleet operators in the UK and internationally. The models will integrate optimisation under uncertainty (e.g., Markov decision processes), preference/choice modelling analytics and operational research to embed meaningful decision support into the training platform and derive useful insights from the dataset.
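
As a minimal sketch of the optimisation-under-uncertainty component (a toy two-state vehicle-condition MDP with made-up transition probabilities and rewards, not a calibrated fleet model), value iteration recovers a simple drive-or-maintain policy:

```python
import numpy as np

# States: 0 = vehicle healthy, 1 = degraded.  Actions: 0 = keep driving, 1 = maintain.
# P[a][s, s'] are hypothetical transition probabilities; R[a][s] are daily rewards (£).
P = np.array([[[0.9, 0.1], [0.0, 1.0]],      # keep driving
              [[1.0, 0.0], [0.95, 0.05]]])   # maintain
R = np.array([[100.0, 40.0],                 # driving a degraded vehicle earns less
              [60.0, 20.0]])                 # maintenance incurs downtime cost
GAMMA = 0.95

V = np.zeros(2)
for _ in range(500):             # value iteration until (near) convergence
    Q = R + GAMMA * (P @ V)      # Q[a, s]: value of action a in state s
    V = Q.max(axis=0)
policy = Q.argmax(axis=0)
print("state values:", V.round(1), "| policy (0 = drive, 1 = maintain):", policy)
```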
Sustainable AI for Recycling (STAIR) AI for Sustainable Operations and Circular EconomyRecycling is defined as “any recovery operation by which waste materials are reprocessed into products, materials, or substances whether for the original or other purposes. It includes the reprocessing of organic material but does not include energy recovery and the reprocessing into materials that are to be used as fuels or for backfilling operations.”

Household waste recycling in the UK is a critical component of the country’s environmental strategy. In 2021, the UK achieved a recycling rate of 44.6% for waste from households, a slight increase from 44.4% in 2020. This rate varies across the UK, with Wales leading at 56.7%, followed by Northern Ireland at 48.4%, England at 44.1%, and Scotland at 41.7%.

Despite these efforts, challenges remain. For instance, a survey in 2024 revealed that UK households discard an estimated 90 billion pieces of plastic annually, with only 17% being recycled domestically. This highlights the need for improved recycling infrastructure and public awareness campaigns. There is still significant room for improvement in the UK’s household waste recycling efforts to reduce landfill use and promote sustainability.

Artificial Intelligence (AI) has the potential to significantly enhance the effectiveness and efficiency of recycling household items. AI-powered systems could be designed to accurately identify and sort recyclable materials. Traditional recycling methods often rely on manual sorting, which can be time-consuming and prone to errors. AI technologies, such as computer vision and machine learning, can quickly and accurately distinguish between different types of materials, ensuring that recyclables are correctly sorted. This reduces contamination and increases the quality of recycled materials.

Robotic systems equipped with AI could automate the recycling process, from collection to sorting. These robots can work around the clock, increasing the throughput of recycling facilities. Automation also reduces the need for human labour in hazardous environments, improving worker safety.

The focus of this research area is to explore how to automate the separation of recyclables. One of the themes is control strategies, for example AI-empowered control strategies [1,2].

This PhD will develop and test controllers that can quickly and accurately pick up complex objects with poorly defined material properties, addressing major weaknesses in current controllers [3]. Control approaches will be based on an understanding of human motor control and will employ a set of a priori internal dynamic models to plan and execute each stage of the movement. They will also embed learning mechanisms to rapidly update the internal model and controller using all available sensory information from current and past grasp attempts. The use of all available sensory information in these learning mechanisms is also a key element of this project, which will involve image recognition technologies and multimodal data modelling, fusion, and learning. We would also like to explore how generative AI could help with enriching the multimodal data to make the learning and control more accurate [4, 5].

[1] Hodson, (2018) How robots are grasping the art of gripping. Nature. 557.7706: S23-S23
[2] Kleinholdermann et al., (2013) Human grasp point selection, J. Vision. doi:10.1167/13.8.23
[3] Ozawa and Tahara (2017) Advanced Robotics, doi:10.1080/01691864.2017.1365011
[4] Aristeidou et al., (2024) Generative AI and neural networks towards advanced robot cognition, CIRP Annals, Volume 73, Issue 1, Pages 21-24, ISSN 0007-8506, https://doi.org/10.1016/j.cirp.2024.04.013.
[5] Ma (2024) Transforming the Future of AI and Robotics with Multimodal LLMs. ARM Newsroom, Accessed on 22 Sep 2024. URL: https://newsroom.arm.com/blog/llms-and-autonomous-robots
Reducing the carbon intensity of AI utilisationSustainable AIMotivation
The rapid growth of AI technologies has led to significant advancements across various sectors. However, the computational power required for training and deploying AI models has resulted in substantial energy consumption and carbon emissions. This PhD research study aims to address the environmental impact of AI by exploring methods to reduce its carbon footprint, contributing to global sustainability efforts, and aligning with climate change mitigation goals.

Main Research Problem and Research Questions
Research Problem: How can the carbon intensity of AI use be minimized without compromising performance, efficiency and utility?

Study aims – To:
• Identify and quantify the primary sources of carbon emissions in the lifecycle of AI models.
• Identify, test, and quantify how AI model architectures may be optimized to reduce energy consumption (and hence carbon intensity) whilst optimising performance.
• Critically evaluate the role hardware and data centre efficiencies play in minimizing AI’s carbon footprint.
Possible objectives for this aim include:
• Review how renewable energy sources could be integrated into AI operations.
• Quantify and review trade-offs between AI performance and carbon intensity.

Suggested Methods
1. Literature Review: Conduct a comprehensive review of existing research on AI’s carbon footprint, energy-efficient AI models, and sustainable computing practices. Look at case studies on organizations that have implemented carbon-reducing strategies in their AI operations.
2. Data Collection: Gather data on energy consumption and carbon emissions from various AI models and data centres, subject to data availability. Explore the feasibility and impact of using renewable energy sources for AI operations. (A minimal measurement sketch is given after this list.)
3. Model Optimization: Develop and test energy-efficient AI model architectures, such as sparse models and low-precision computations, among other techniques.
4. Hardware Analysis: Evaluate the impact of different hardware configurations (e.g., Tensor Processing Units, Graphics Processing Units and other accelerators) on energy consumption and performance.
5. Simulation and Analysis: Use simulation tools to model the carbon impact of different AI deployment scenarios and analyse any trade-offs between performance and carbon intensity.
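
Relating to the data-collection and trade-off steps above, one low-friction option is to wrap training runs with the open-source codecarbon package (assumed available; other meters or direct power measurement would work equally well), logging an estimated energy/emissions figure alongside model accuracy:

```python
from codecarbon import EmissionsTracker
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in workload; in the study this would be each AI model variant under test.
X, y = make_classification(n_samples=20_000, n_features=50, random_state=0)

tracker = EmissionsTracker(project_name="carbon-intensity-study")
tracker.start()
model = RandomForestClassifier(n_estimators=300, n_jobs=-1).fit(X, y)
emissions_kg = tracker.stop()   # estimated kg CO2eq for this training run

print(f"accuracy={model.score(X, y):.3f}, estimated emissions={emissions_kg:.6f} kg CO2eq")
# Repeating this for sparse or low-precision variants produces the
# performance-versus-carbon-intensity trade-off curve analysed in step 5.
```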

References:
1. Makridakis, S. (2017). The forthcoming artificial intelligence (AI) revolution: Its impact on society and firms. Futures, 90, 46–60. https://doi.org/10.1016/j.futures.2017.03.006.
2. Dhar, P. The carbon impact of artificial intelligence. Nat Mach Intell, 2, 423–425 (2020). https://doi.org/10.1038/s42256-020-0219-9.
3. Arya, A., Bachheti, A., Bachheti, R.K., Singh, M., Chandel, A.K. (2024). Role of Artificial Intelligence in Minimizing Carbon Footprint: A Systematic Review of Recent Insights. In: Chandel, A.K. (eds) Biorefinery and Industry 4.0: Empowering Sustainability. Green Energy and Technology. Springer, Cham. https://doi.org/10.1007/978-3-031-51601-6_14.
4. Luers, A., Koomey, J., Masanet, E., Gaffney, O., Creutzig, F., Ferres, J. L., & Horvitz, E. (2024). Will AI accelerate or delay the race to net-zero emissions? Nature. https://www.nature.com/articles/d41586-024-01137-x.
5. Gibney, E. (2022). How to shrink AI’s ballooning carbon footprint. Nature. https://www.nature.com/articles/d41586-022-01983-7.
6. Zhao, P., Gao, Y., Wu, M. et al (2024). How artificial intelligence affects carbon intensity: heterogeneous and mediating analyses. Environ Dev Sustain. https://doi.org/10.1007/s10668-024-05085-4.
Collective Perception Using Low-cost CamerasAI for the Natural EnvironmentJointly collecting and sharing information among members of a group is a classic problem in swarm robotics---typically with the aim that a coherent shared collective perception emerges within the swarm. In this project the student will explore a related challenge: equipping numerous mobile---and potentially freely acting---agents (drones, humans, robots, vehicles, animals, etc.) with sensors and synthesising from the individual data streams a macro perspective.

Recent progress in technology (e.g. ultra-low power cameras) makes it possible for small sensors to hitchhike along mobile hosts for extended periods and gather information from the environment with good spatial and temporal resolution.

To achieve its aim, this project will develop both low-cost embedded sensor modules and the concomitant software tools for processing and visualisation. We will aim at real-world applications (Geography, Biology, Environmental Science)---but there is also plenty of scope for blue-sky exploration (e.g., cat collars that map air pollution). In accordance with the skills and interests of the student, the project can be scoped more towards the hardware side (using conventional state-of-the-art data analysis/visualisation) or more towards the software aspects (using off-the-shelf embedded hardware). Depending on the application scenario, privacy and security can be important and could be included.

We foresee a twofold outcome: Firstly a general framework for information gathering with a swarm of heterogeneous agents---some or all of which may act freely and do not follow instructions. Secondly, a real-world implementation that employs the framework to deliver new insights for a specific application domain. Together these outcomes should provide a new instrument for environmental research together with a case study that demonstrates its use.

Relevant Publications
• Berlinger, F., Gauci, M. and Nagpal, R., 2021. Implicit coordination for 3D underwater collective behaviors in a fish-inspired robot swarm. Science Robotics, 6(50), p.eabd8668.
• Soorati, M., Clark, J., Ghofrani, J., Tarapore, D. and Ramchurn, S.D., 2021. Designing a user-centered interaction interface for human–swarm teaming. Drones, 5(4), p.131.
Using Aerial Swarms for Wildfire Suppression AI for the Natural Environment
Wildfire burns approximately 420 million hectares of land each year, with catastrophic ecological and economic effects around the world [Giglio et al., 2018]. Current wildfire suppression methods are not responsive during the early stages of a fire and are less effective once the fire intensifies. Furthermore, the reliance on manual intervention increases the risk to firefighters, as existing methods lack automation and require direct human involvement. An alternative approach is to use human-in-the-loop deployment of an aerial swarm that can quickly respond to early identification of incipient-stage fires and suppress the fire before it develops. This project aims to develop a swarm distribution and path planning algorithm that can find, in a distributed manner, the optimal task allocation and paths for the swarm to have the maximum effect on the spread of the fire. The student will use multi-agent inverse reinforcement learning to follow the path identified by the human operator [Agunloye et al., 2024], who oversees the overall execution and accepts or rejects plans based on the overall mission strategy. Path planning must take into account environmental limitations, including wind, smoke, and limited communication, which may hinder coordination among the aerial robots. The project will focus on designing a shared representation of the environment that also accounts for the limited communication payload and the uncertainty of connection links between the agents [Kelly et al., 2024].

The project can be divided into three main objectives: 1) The robots will collectively build the fire spread model using incoming spread maps from other agents, which will be validated and combined with the robots' internal models. 2) The collective will allocate each potential fire area to the agents and plan a path that leads to effective suppression of the fire. 3) The coordinated plan will be adaptive to human intervention, converting human suggestions into concrete action plans. The robots will iterate the process of updating the fire spread model and suppression planning until all fire instances are extinguished. The performance of the swarm will be validated against existing models to evaluate their effectiveness and efficiency, and a user study with qualitative and quantitative metrics will assess firefighters’ perceptions of the mission performance as well as their trust in the system's behavior.
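
As a minimal sketch of the allocation step in objective 2 (made-up coordinates and a plain distance cost; the real allocation would also weigh fire intensity, wind and communication constraints), the Hungarian algorithm assigns fire cells to drones:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(4)

drones = rng.uniform(0, 10, size=(4, 2))   # hypothetical drone positions (km)
fires = rng.uniform(0, 10, size=(4, 2))    # detected incipient fire cells (km)

# Cost = straight-line distance, used here as a proxy for response time.
cost = np.linalg.norm(drones[:, None, :] - fires[None, :, :], axis=-1)

rows, cols = linear_sum_assignment(cost)   # minimum total response-time assignment
for i, j in zip(rows, cols):
    print(f"drone {i} -> fire cell {j} ({cost[i, j]:.1f} km)")
```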

Giglio, L., Boschetti, L., Roy, D.P., Humber, M.L., and Justice, C.O. 2018. The Collection 6 MODIS burned area mapping algorithm and product. Rem. Sens. Environ. 217: 72–85. doi:10.1016/j.rse.2018.08.005.

Agunloye, A. O., Ramchurn, S. D., & Soorati, M. D. (2024). Learning to Imitate Spatial Organization in Multi-robot Systems. arXiv preprint arXiv:2407.11592.

Kelly, T. G., Soorati, M. D., Zauner, K. P., & Ramchurn, S. D. (2024). Trade-offs of Dynamic Control Structure in Human-swarm Systems. arXiv preprint arXiv:2408.02605.
Artificial intelligence for resilient complex transportation networkAI for Transportation and LogisticsComplex dynamical systems are found across multiple disciplines, from protein folding to brain dynamics and from complex networks to galaxies. They present structures such as equilibrium points, periodic and libration solutions, and resonances that are of paramount importance for understanding and controlling the evolution of these systems. Being able to identify these structures is fundamental to ensuring the resilience of national infrastructures, manufacturing effective medication, understanding and curing disease, studying the evolution of climate on Earth, preserving our ability for space exploration, and more. The state-of-the-art machine learning models for dynamical systems mainly work well for linear systems but are all limited in some way for complex nonlinear systems, because they either rely on the equations of motion or on particular types of observables, or are unable to handle strongly nonlinear systems with multiple fixed points and complex modal interactions [1]. It is essential to address these fundamental limitations of state-of-the-art machine learning models in the identification and prediction of the behaviours of complex and high-dimensional dynamical systems.

The project will specifically focus on physics-based complex dynamic transportation networks such as airports and space traffic control. These networks have numerous interrelations and interdependencies among physical infrastructure and between physical and digital networks. They evolve over time based on local processes that define the system's collective properties. To prevent large-scale collapses of transportation systems and develop efficient reaction strategies, it is essential to identify crucial or weak elements of the network and understand the progressive damage caused by successive node removals or failures. Agent-based models, commonly used for complex network modelling as a powerful microscopic modelling approach [2], fully incorporate stochastic effects and simulate complex dynamical processes. However, as the number of parameters grows, it becomes difficult to discern the impact of specific modelling assumptions or parameters. Additionally, performing backward sensitivity analysis is often challenging due to the large number of parameters and dynamic rules incorporated. The project will leverage recent advances in AI, combined with probabilistic methods such as Bayesian inference, to develop a new paradigm for automatically and robustly discovering these complex dynamical systems, simulating their temporal evolution, making inferences with unknown network dynamics and solving high-dimensional combinatorial optimisation problems [3]. The project will particularly commit to improving the robustness and resilience of the dynamics of these complex transportation networks, ultimately contributing to the overarching goal of achieving a net-zero and sustainable future.
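
As a minimal sketch of the resilience question (a scale-free random graph standing in for a real airport or route network), progressively removing the highest-degree nodes and tracking the largest connected component gives a simple robustness curve:

```python
import networkx as nx

G = nx.barabasi_albert_graph(n=200, m=2, seed=0)   # proxy for a hub-and-spoke network
n_total = G.number_of_nodes()

H = G.copy()
curve = []
for _ in range(40):
    # Targeted failure: remove the current highest-degree node (e.g. a hub airport).
    hub = max(H.degree, key=lambda node_deg: node_deg[1])[0]
    H.remove_node(hub)
    giant = max(nx.connected_components(H), key=len)
    curve.append(len(giant) / n_total)

print("fraction of nodes still in the giant component after each removal:")
print([round(x, 2) for x in curve[:10]])
```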

[1] Cenedese, M., Axås, J., Bäuerlein, B., Avila, K., & Haller, G. (2022). Data-driven modeling and prediction of non-linearizable dynamics via spectral submanifolds. Nature communications, 13(1), 872.
[2] Barrat, A., Barthelemy, M., & Vespignani, A. (2008). Dynamical processes on complex networks. Cambridge university press.
[3] Ding, J., Liu, C., Zheng, Y., Zhang, Y., Yu, Z., Li, R., ... & Li, Y. (2024). Artificial Intelligence for Complex Network: Potential, Methodology and Application. arXiv preprint arXiv:2402.16887.
Large area 2D semiconductor platformsSustainable AIMoore’s Law is currently being challenged, with Nvidia’s CEO recently claiming it is dead. The scaling of transistors cannot continue due to the physical limitations of silicon, posing a threat to the sustainable evolution of new technologies. 2D semiconductors offer a solution, as they can be scaled to the molecular level and used to create in-memory computing components, one of the key elements of neuromorphic computing, the hardware that will support the next generation of artificial intelligence.
The project aims to create a revolutionary semiconductor platform using 2D materials to unlock the ultimate limit in miniaturisation of semiconductors. You will benefit from state-of-the-art custom large area 2D equipment not available anywhere else.
If you like learning and applying novel concepts using the latest technology, you will certainly enjoy working with us. During your PhD studies you will have the opportunity to learn how to design, fabricate and characterise materials and devices for integrated electronics and photonics at the cutting edge of research. The project includes a development plan, with freedom to innovate in both material and device design domains. In addition to field specific skills, the University of Southampton training and mentoring programmes will provide training in report writing, project management, time management, presentation skills, and safety, all of which are applicable to future academic or industrial employability.
We are looking for a passionate candidate excited about the latest developments in technology. You will need a background in physics, chemistry, engineering, electronics or a related discipline. A basic understanding of semiconductor physics, photonics and material science is essential, and we will support you to expand in all these subjects. Experience with experimental work in either electronics, physics, optics or photonics, and computer modelling and/or programming languages are also desirable.
The University of Southampton is committed to sustaining an inclusive environment for all students and staff. The University holds an Athena SWAN Silver Award and works continuously to improve equality in the workplace and encourage a work-life balance. Our focus is on the development and progression of our students and researchers, and we achieve this through a friendly, supportive environment.
Novel Materials for Neuromorphic ApplicationsSustainable AIThe current increase in data generation is expected to reach unsustainable rates by the end of the decade. This has a strong impact on the environment, and therefore new solutions are sought after. In addition, specific applications such as image recognition and lidar are more efficiently processed in the light domain. Integrated photonics has the inherent ability to support a much larger data density than electronic solutions. Advanced reprogrammable photonic materials enable neuromorphic computation, a key component of efficient artificial intelligence. Our work is to build the most efficient components by developing the next generation of advanced materials to achieve sustainability in AI applications.
In collaboration with an EU consortium, we work to create a reprogrammable neuromorphic photonic platform for a variety of applications from telecommunications to biosensing. If you enjoy developing new technologies and applying novel concepts, you will enjoy working with us. Our facilities are unique in the UK and will be available to develop your skills in the design, characterisation, optimization, and experimental application of novel materials and devices. You will have the opportunity to optimise the processes and materials and thus be the first in the world to use them.
We are looking for a passionate candidate excited about the latest developments in technology. You will work in a multidisciplinary, motivating and supportive environment. You will need a background in physics, chemistry, engineering, electronics or a related discipline. A basic understanding of semiconductor physics, photonics and material science is essential, and we will support you to expand in all these subjects. Experience with experimental work in either electronics, physics, optics or photonics, and computer modelling and/or programming languages are also desirable.
The University of Southampton is committed to sustaining an inclusive environment for all students and staff. The University holds an Athena SWAN Silver Award and works continuously to improve equality in the workplace and encourage a work-life balance. Our focus is on the development and progression of our students and researchers, and we achieve this through a friendly, supportive environment.
AI for Sustainable Operations and Circular Economy: The Case for Electronics Waste Supply ChainAI for Sustainable Operations and Circular EconomyIntroduction
The increasing levels of electronic waste (e-waste) and the environmental challenges posed by traditional electronic waste supply chains highlight the need for more sustainable practices. A circular economy (CE) offers a promising solution by promoting resource reuse, remanufacturing, and recycling, reducing the environmental impact. However, implementing circular economy practices in industries like electronics requires sophisticated tools for decision-making, negotiations, and policy evaluation. This research proposes the development of AI-driven intelligent agents to support decision-making and negotiation processes in the circular economy, automating complex tasks while keeping managers in the loop.
Research Aim
The primary goal of this research is to develop intelligent agents capable of analyzing various circular economy options and negotiating on behalf of firms to automate the negotiation process while integrating managers' preferences for cost reduction and profitability. The agents will help evaluate circular economy options, handle contract negotiations, and assess the long-term efficacy of policy-level incentives.
Research Questions
The research will focus on the following key questions:
1. CE Option Evaluation: How can algorithmic game theory be applied to evaluate different circular economy options (e.g., recycling, remanufacturing, and reusing materials) in the electronics waste and automobile supply chain? (The research will develop algorithms that enable intelligent agents to assess multiple circular economy options by evaluating costs, revenues, and sustainability outcomes in real-time.)
2. Contract Implementation: How can bargaining games be used to model negotiations for cost- and revenue-sharing agreements in circular economy practices, accounting for seasonality in waste reuse? (The research will focus on creating AI models capable of mimicking circular economy negotiations, allowing firms to develop smart contracts that adjust based on changes in supply chain dynamics and waste availability.)
3. Policy-level Incentive Mechanisms for CE: How can agent-based simulations be employed to evaluate the long-term efficacy of policy-level incentives for promoting circular economy practices?
(This question explores the potential for AI-driven simulations to test various incentive mechanisms and their effectiveness in encouraging companies to adopt sustainable practices in the electronics waste and automobile sectors.)
Methodology
The research will employ a combination of algorithmic game theory, bargaining games, and agent-based modeling to create intelligent systems that automate key processes in circular economy implementation. The methodology will be divided into three main stages:
1. Development of Intelligent Agents: AI agents will be designed to evaluate CE options, negotiate contracts, and assess policy incentives. These agents will use machine learning and algorithmic game theory to make data-driven decisions while incorporating managerial preferences for cost reduction and profit thresholds.
2. Simulation of Circular Economy Negotiations: Using bargaining games as the modelling framework, the agents will simulate real-world negotiations for cost- and revenue-sharing in the electronics waste and automobile supply chains (a minimal bargaining sketch is given after this list). Smart contracts will be developed to handle the seasonality of waste reuse, ensuring flexibility in contract terms.
3. Policy Evaluation through Agent-Based Simulation: Agent-based models will be used to simulate long-term scenarios in which various policy incentives (e.g., tax breaks, subsidies, and penalties) are introduced to promote circular economy practices. The models will evaluate the efficacy of these policies in driving sustainable behavior within the supply chain.
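
As a minimal sketch of the bargaining component in stage 2 (made-up surplus and disagreement payoffs for an electronics firm and a recycler), the Nash bargaining solution selects the revenue share that maximises the product of gains over each party's outside option:

```python
import numpy as np

SURPLUS = 100.0                    # hypothetical joint surplus (£k) from a take-back contract
D_FIRM, D_RECYCLER = 20.0, 10.0    # disagreement payoffs (status-quo profits)

shares = np.linspace(0.0, 1.0, 1001)       # recycler's share of the surplus
u_recycler = shares * SURPLUS
u_firm = (1.0 - shares) * SURPLUS

# Nash product over feasible, individually rational splits only.
nash_product = np.where((u_firm >= D_FIRM) & (u_recycler >= D_RECYCLER),
                        (u_firm - D_FIRM) * (u_recycler - D_RECYCLER), -np.inf)
best = shares[np.argmax(nash_product)]
print(f"recycler share = {best:.2f}, firm share = {1 - best:.2f}")
# An intelligent agent would replace the fixed SURPLUS with seasonally varying
# waste volumes and re-negotiate the split as supply chain conditions change.
```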

Note that, at this stage, the research is focused on the electronics waste supply chain. However, depending on future data availability, the specific supply chain of focus may be adjusted accordingly.
Advancing sustainable and resilient marine fisheries for the future using AIAI for the Natural EnvironmentSustainable development is traditionally viewed in relation to three fundamental pillars that represent the social, economic and environmental domains. This approach has been criticised for failing to acknowledge the intricate and interconnected interactions between the pillars and the dynamic nature of the systems in which they function. Instead, silos are reinforced and governance fragmented1, leading to failure to address complex challenges2. To manage natural resources in a more sustainable way, a systems-based approach is now needed to identify and capitalise on synergies that may exist between sectors, while minimising or even avoiding trade-offs that result in negative consequences3.
The long-term overexploitation of marine fish stocks4 has rendered them poorly able to buffer, adapt and recover from additional stressors and shocks (e.g. policy, economic, climate). Consequently, the fisheries and fishing communities the fish populations support are also imperilled and lack long-term resilience. Acknowledging that fisheries are social-ecological systems5 in which the fish stocks themselves form the foundations on which economic prosperity and wellbeing of fishing communities depends, future management should adopt a systems dynamics-based approach to enhance sustainability and resilience.
Focusing on marine fisheries, this interdisciplinary project will use AI to investigate how fishing communities respond to a series of shocks so that lessons may be learnt on how the resilience of the resource and the people it supports may be reinforced.
(1) Creating a conceptual systems dynamics model of case-study marine fisheries using causal loop and stocks and flows diagrams and where appropriate build a systems model to test assumptions.
(2) Investigating and quantifying fishing activity over time-scales that cover the period of the shock. AI will be used to improve the efficiency of interrogating the large datasets available, such as those provided by: (a) Automated Identification Systems (AIS) that track fishing vessel activity; (b) geospatial remote sensing satellite data of fishing vessel abundance (e.g. for vessels operating in less developed regions of the world where AIS data are unavailable); and (c) social media data for inference when direct evidence is unavailable or difficult to obtain (e.g. for very small vessels or recreational boats). A minimal illustrative sketch follows this list.
(3) Exploring shifts in catch from available data on landings for specific species and vessel types.
(4) Obtaining information related to the status of specific fish stocks.
(5) Correlating measures of activity and catch / stock status with metrics that document the consequences of shocks.
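To make item (2) concrete, a minimal sketch of interrogating AIS data is given below. The file name, column names, and the fixed speed band are assumptions for illustration; in practice a classifier trained on labelled fishing tracks would replace the simple threshold rule.

import pandas as pd

# Hypothetical AIS extract with columns: vessel_id, timestamp, speed_knots, lat, lon
ais = pd.read_csv("ais_positions.csv", parse_dates=["timestamp"])

# Rule of thumb: trawling tends to happen at low, steady speeds (roughly 2-5 knots),
# whereas transiting is faster; a trained classifier would learn this from data
ais["likely_fishing"] = ais["speed_knots"].between(2, 5)

# Fraction of position reports per vessel and month that fall in the fishing band
activity = (
    ais.assign(month=ais["timestamp"].dt.to_period("M"))
       .groupby(["vessel_id", "month"])["likely_fishing"]
       .mean()
       .rename("fishing_fraction")
       .reset_index()
)
print(activity.head())

Monthly activity series of this kind could then be aligned with the timing of a shock (e.g. a fuel-price spike or an area closure) to support the analyses described in items (2) and (5).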
The study will help improve understanding of how those who depend on marine fisheries for food respond and adapt in the face of systemic shocks, enabling complex feedback loops to be evaluated and trade-offs, synergies and unintended consequences to be identified. Lessons will be learned on how to enhance the resilience of primary resources and the communities that depend on them in the face of ongoing existential threats, such as climate change.
References:
1Bogers, M., Biermann, F., Kalfagianni, A., Kim, R. E., Treep, J. and Vos, M. G. 2022. The impact of the Sustainable Development Goals on a network of 276 international organizations. Global Environmental Change 76. DOI: 10.1016/j.gloenvcha.2022.102567.
2UN 2023. Halfway to 2030, world ‘nowhere near’ reaching Global Goals, UN warns https://news.un.org/en/story/2023/07/1138777
3Kemp, P. S., Acuto, M., Larcom, S., Lumbroso, D. and Owen, M. 2022. Exorcising Malthusian ghosts: Vaccinating the nexus to advance integrated water, energy and food resource resilience. Current Research in Environmental Sustainability, 4, 100108. DOI: 10.1016/j.crsust.2021.100108.
4Thurstan, R., Brockington, S. & Roberts, C. The effects of 118 years of industrial fishing on UK bottom trawl fisheries. Nat Commun 1, 15 (2010). https://doi.org/10.1038/ncomms1013
5O
Robot swarms for a digital forest health monitoring systemAI for the Natural EnvironmentMotivation: Forests, covering nearly one-third of the global land surface, are crucial for regulating carbon, water, and energy cycles, mitigating climate change, and supporting biodiversity. They provide critical economic, health, and other benefits. However, forests face increasing stress from climate change and human activities, impacting their ability to provide ecosystem services. Accurate forest health information is essential for understanding ecosystem stability and developing mitigation strategies.

Research Problem: The main challenge in large-scale forest health monitoring lies in the measurement gap between satellite/airborne laser scanning (ALS) data, which provides low-resolution canopy-level information, and the actual conditions under and within the canopy. Many aspects of forest health can only be determined from detailed ground-based measurements, which are currently manual and offer limited spatial and temporal coverage. Traditional methods, such as measurement tapes, hypsometers, clinometers, and tripod-mounted terrestrial laser scanners (TLS), are accurate but cumbersome and time-consuming. Mobile laser scanning (MLS) instruments are more portable but still require human operation. These methods focus on a limited set of forest parameters, either structural (e.g., trunk diameter, tree height) or functional (e.g., leaf chlorophyll content, water content), and rarely measure them together, resulting in an incomplete picture of forest health.

Methodology: The proposed solution involves developing robot swarm technology to automate ground observations of forest structural and radiometric properties. Robot swarms, composed of low-cost portable robots, can coordinate to gather environmental data across large areas. Their low unit cost and decentralized coordination strategy allow for scaling up the swarm size to scan large forest areas using divide-and-conquer strategies. The portable nature of the robots enables easy deployment and recovery, minimizing their impact on the forest floor. These robots can navigate difficult terrain and quickly scan large areas. Preliminary experiments with a remote-controlled prototype swarm rover demonstrated the ability to drive through a typical 20m x 20m forest plot in about six minutes. Innovative AI algorithms for 3D data analysis will be applied to convert these co-located individual measurements into key forest parameters, focusing on autonomy in data collection and processing. This approach aims to provide a novel data stream of high-resolution measurements (cm to m) and the ability to measure multiple forest parameters simultaneously, offering a comprehensive picture of forest health.
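As one example of the kind of 3D data analysis involved, the sketch below estimates trunk diameter at breast height by fitting a circle to a thin horizontal slice of a rover-acquired point cloud. The input format and the simple least-squares (Kasa) circle fit are assumptions made for illustration, not the project's actual processing pipeline.

import numpy as np

def fit_circle(x, y):
    # Algebraic (Kasa) least-squares circle fit: returns centre (a, b) and radius r
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, d), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(d + a**2 + b**2)

def estimate_dbh(points, slice_height=1.3, tolerance=0.05):
    # points: N x 3 array of (x, y, z) returns around a single stem, z measured from the ground
    mask = np.abs(points[:, 2] - slice_height) < tolerance
    slice_xy = points[mask]
    if len(slice_xy) < 10:
        return None  # too few returns to recover the stem cross-section
    _, _, radius = fit_circle(slice_xy[:, 0], slice_xy[:, 1])
    return 2.0 * radius  # diameter at breast height, in the units of the input cloud

Comparable routines for height, canopy gap fraction, and leaf radiometric properties would together build the multi-parameter picture of forest health described above.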

References:
Tarapore, D., Groß, R. & Zauner, K. P. Sparse Robot Swarms: Moving Swarms to Real-World Applications. Front Robot AI 7, (2020).

Niu, C., Zauner, K. P. & Tarapore, D. End-to-End Learning for Visual Navigation of Forest Environments. Forests 14, (2023).

Niu, C., Tarapore, D. & Zauner, K. P. Low-viewpoint forest depth dataset for sparse rover swarms. In IEEE International Conference on Intelligent Robots and Systems (2020).

Brown, Luke A., et al. "Near-infrared digital hemispherical photography enables correction of plant area index for woody material during leaf-on conditions." Ecological Informatics 79 (2024): 102441.

Brown, Luke A., et al. "Hyperspectral leaf area index and chlorophyll retrieval over forest and row-structured vineyard canopies." Remote Sensing 16.12 (2024): 2066.
Development of a decision support tool for offshore wind foundation installationAI for Sustainable Operations and Circular EconomyThe goal of this project is to develop a mechanism-based machine learning algorithm, trained on a large database of field data, to optimise the installation of offshore wind turbine foundations and to shape the decision-making process during installation.

In the UK alone, thousands of anchors and foundations must be installed every year to support the (floating) offshore wind turbines necessary to achieve the net zero objectives. Piles are currently the most commonly used foundation type and pile driving is the main installation method. However, pile hammering creates loud underwater noise, which is particularly detrimental to marine mammals. In contrast, suction piles are installed by pumping water out of the pile interior; the resulting under-pressure (suction) pushes the pile into the ground without any impact noise.

The installation of suction piles is challenging. Premature refusal, i.e. termination of the installation before the target depth is reached, may require the full removal of the pile before its relocation, which carries a significant (carbon) cost. Refusal can be caused by mechanisms such as piping (loss of the internal suction) or plug heave (upward movement of the internal soil plug), but these mechanisms are not yet well predicted, especially in challenging geologies such as hard or layered soils. Innovative techniques, such as suction cycling, have recently been introduced to overcome these issues, but their effect is not yet predictable.
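To make the installation mechanics concrete, the snippet below implements a deliberately simplified force balance for suction installation in clay: the pile advances when its submerged weight plus the under-pressure acting on the lid exceeds the shaft friction and tip resistance. The resistance model, parameter values, and resulting numbers are illustrative assumptions only, not the project's predictive model.

import numpy as np

def required_suction(depth, D=5.0, t=0.05, W_sub=1.5e6, alpha=0.5, su=20e3, Nc=9.0):
    # Under-pressure (Pa) needed to advance a suction pile to `depth` (m) in clay.
    # D: diameter (m); t: wall thickness (m); W_sub: submerged weight (N);
    # alpha: adhesion factor; su: undrained shear strength (Pa); Nc: bearing factor.
    A_out = np.pi * D * depth                 # outer shaft area
    A_in = np.pi * (D - 2 * t) * depth        # inner shaft area
    A_tip = np.pi * D * t                     # annular tip area (approximate)
    A_lid = np.pi * (D - 2 * t) ** 2 / 4      # plan area over which the suction acts
    resistance = alpha * su * (A_out + A_in) + Nc * su * A_tip
    return max(0.0, (resistance - W_sub) / A_lid)

for z in (2.0, 5.0, 10.0):
    print(f"depth {z:4.1f} m -> required suction ~ {required_suction(z) / 1e3:5.1f} kPa")

Refusal checks for piping or plug heave would then compare the required suction against an allowable limit derived from the soil profile.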

The goal of this project is to develop a mechanism-based machine learning algorithm to optimise the installation of suction piles, in terms of site selection or installation parameters (pumping sequence) to reduce installation time and risk. This goal will be achieved by meeting the following research objectives:

1. Identify the different failure mechanisms encountered during suction pile installation and determine their geotechnical features.
2. Develop a mechanism-based predictive model for suction pile installation that accounts for uncertainties in soil properties and provides model predictions that enable live decision making in the field.
3. Propose new installation strategies and quantify their effect on suction pile installation.

This work will exploit a large database of field results, including failed installations. The database contains time series recorded during installation (pressure, depth, tilt, flow rate), which in some cases have suction cycles overlain, as well as the associated depth profiles of ground properties acquired during the site investigation. The project will be split into the following tasks:

WP1 (data pre-processing): Interpretation of available data and identification of the most relevant physical mechanisms during suction pile installation. Quantification of uncertainties in soil properties.
WP2 (ML algorithm): Development of a machine learning algorithm constrained by the identified physical mechanisms, and its calibration against field tests and the available literature. Estimation of the uncertainty in refusal predictions arising from model inaccuracies and soil properties (a minimal illustrative sketch of one mechanism-constrained approach follows this list).
WP3 (Innovation): Identification of potential optimal installation strategies to accelerate and/or improve the reliability of suction pile installation. Extension of the predictive model to account for those innovations.
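One common way to keep a model of the WP2 type mechanism-based is to let the machine-learning component predict only the discrepancy between the recorded suction pressures and a physics baseline, such as the force balance sketched above. The feature set, model choice, and bootstrap idea below are illustrative assumptions rather than the chosen method.

from sklearn.ensemble import GradientBoostingRegressor

def fit_residual_model(X, y_measured, y_mechanism):
    # X: per-record features from the installation database and site investigation
    # (e.g. depth, CPT resistance, pile geometry); y_measured: recorded suction;
    # y_mechanism: mechanism-based prediction for the same records
    model = GradientBoostingRegressor()
    model.fit(X, y_measured - y_mechanism)   # learn only the correction to the physics
    return model

def predict_suction(model, X, y_mechanism):
    # Physics baseline plus the learned data-driven correction
    return y_mechanism + model.predict(X)

# Prediction uncertainty could be approximated by refitting on bootstrap resamples
# of the installation database and reporting the spread of the resulting predictions.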