# Positions

## Project details

- PhD 1a
- PhD 1b
- PhD 1c
- PhD 2a
- PhD 2b
- PhD 2c
- Postdoc 2d
- Postdoc 2e
- PhD 3a
- PhD 3b
- PhD 3c
- PhD 4a
- PhD 4b

##### Title:

Towards hybrid model formulations: data-driven solutions exploiting constitutive equations

##### Supervision:

Prof.dr.ir. M.G.D. Geers (TU/e-MoM), Dr.ir. R.H.J. Peerlings (TU/e-MoM), Dr.ir. J.P.M. Hoefnagels (TU/e-MoM)

##### Scientific challenges

Existing models often fall short in their predictive capabilities, and consistency with the growing number of data sets at different scales is missing. Integration of data with models, targeting improved predictions, is therefore called for. Project 1a targets the development of a thermodynamically consistent data-driven approach. For this purpose, use will be made of data sources of a different nature, i.e. both experimental and model-based data. The literature has made significant steps in establishing data-driven approaches that do not lean on any interpretation of the data and that also account for thermodynamic consistency. This is the point of departure for this project.

##### Methods

The project will focus on two main objectives, which define the corresponding methods:

- Establish a multiscale data-driven solution procedure, making use of RVE data and experimental data only, while taking into account recent developments in history-dependent data-driven solvers. Since data emerging from RVEs originate from models, they are not expected to be quantitatively exact, but they incorporate the dependency of the material response on various microstructural parameters. The construction of a data manifold, combining small-scale simulation models with experimental data at both scales, is therefore an important task as well. This data manifold will be extended with a data fidelity measure that will be integrated in the data-driven minimization procedure, where weighted functionals will be constructed for the minimization.
- Expand the data manifold towards hybrid data–constitutive manifolds, exploiting the concept of a constitutive manifold built around both existing constitutive equations and experimental data, incorporating fidelity measures for each. The fidelity measures for each data source may differ significantly, depending on the process to be simulated at the macro-scale. The advantage of this approach is that constitutive equations may still steer the solution in data-poor regions. Moreover, all known dependencies (time, temperature, strain rate, prior history, composition, anisotropy, etc.) in existing constitutive equations may be used in the solution where the data fall short or tend to be incomplete.

One of the main challenges is to interweave the dependence on various process and microstructural parameters (composition, anisotropy, work hardening, etc.). Such effects can be well studied at the RVE level and hence integrated in the multiscale data-driven approach.
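As a purely illustrative sketch of the weighted data-driven minimization described above, the snippet below selects, for a single material point with scalar strain and stress, the data point that minimizes a fidelity-weighted energetic distance to a mechanically admissible state. The function name, the scaling constant `C` and all numbers are invented for this example; the actual solver operates on tensorial, history-dependent data.

```python
# Minimal sketch of a fidelity-weighted, distance-minimising data-driven
# material point (after the model-free data-driven paradigm). All values
# are illustrative; the real solver works on tensorial fields.

def closest_state(dataset, fidelity, strain_c, stress_c, C=1.0):
    """Return the data point minimising the energetic distance to the
    mechanically admissible state (strain_c, stress_c); the distance of a
    point is inflated by 1/fidelity, so untrusted data are avoided."""
    best, best_d = None, float("inf")
    for (eps, sig), w in zip(dataset, fidelity):
        d = (0.5 * C * (eps - strain_c) ** 2
             + 0.5 / C * (sig - stress_c) ** 2) / w
        if d < best_d:
            best, best_d = (eps, sig), d
    return best

# noisy "experimental" points around sigma = 2*eps, plus one outlier
data = [(0.0, 0.0), (0.5, 1.1), (1.0, 1.9), (1.0, 5.0)]
fidelity = [1.0, 1.0, 1.0, 0.1]   # the outlier is assigned low fidelity

state = closest_state(data, fidelity, strain_c=1.0, stress_c=2.0)
```

Dividing the distance by the fidelity makes low-fidelity outliers effectively remote, so they rarely steer the solution.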

##### Title:

Physically consistent data-driven constitutive models by machine learning

##### Supervision:

Dr.ir. E.S. Perdahcioglu (UT-NSM), Prof.dr.ir. A.H. van den Boogaard (UT-NSM)

##### Description

This project’s goal is to develop highly efficient and predictive material models that can be used in simulation of forming processes. The key aspect of these models will be to incorporate physics-based information relating to microstructural features and fluctuations in the composition of the material. Machine learning approaches will be investigated to build a model that will be trained using data from crystal plasticity simulations and will be thermodynamically consistent thanks to a physics-based architecture.

##### Scientific challenges

Existing models that represent the elasto-plastic behaviour of materials inherently obey thermodynamical and physical constraints. They have, however, been developed on the basis of the intuition of researchers in interpreting experimentally observed data. In order to capture the effects of scatter in material composition and process conditions, enriched models that are sensitive to this higher level of microstructural detail are needed.

The data points that will be used in constructing the constitutive manifolds, whether experimental or synthetic (micro- and macro-scale), intrinsically obey the aforementioned fundamental constraints. Moreover, for the synthetic data the path-dependency is also known. Casting them into novel mathematical formulations using a machine learning approach will improve not only the reliability and stability but also the accuracy with respect to the size of the given data set. This approach differs from a purely data-driven (model-free) methodology, even when thermodynamical consistency is accounted for, since the outcome of the model is not purely an interpolative surrogate but a collection of constitutive manifolds.

##### Methods

The main focus of the project can be summarized as follows:

- Identify an optimal set of variables and parameters to build the constitutive manifolds. In the synthetic (RVE) data, the history of the mechanical variables and their distribution are available, which makes it a challenge to reduce the vast tensorial data to its most information-rich yet computationally manageable form.
- Build constitutive manifolds. By keeping the model sufficiently free, the full potential of machine learning strategies will be exploited. The synthetic data will be interpreted using the identified set of variables in a form in which the most characteristic features are selected, constituting the basis of the manifolds.
- Define a minimal but most inclusive set of physical constraints to be introduced to the machine learning model. The constructed bases have to obey these constraints so that even for scarce training data the tensorial predictions will not violate well-established thermodynamical and constitutive principles.
- The identified (several) manifolds will be assembled in order to yield the full constitutive behaviour that relates the known material information to its expected mechanical response for arbitrary strain paths.
- By further training with experimental data, the fidelity of the model will be increased and predictions based on limited experimental information on real materials for complicated strain paths will be validated.

Correlations between microscopic factors (composition, grain size, texture etc.) and observed macroscopic mechanical phenomena (different stages of hardening, Bauschinger effect etc.) will be resolved. The challenge is in interweaving multiple data-driven manifolds with differing length scales while incorporating experimental data as a fidelity measure.
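A minimal, hypothetical illustration of such consistency-by-construction (one-dimensional, with an invented parameterization): if the stress is defined as the exact derivative of a convex potential whose coefficients are squared, the predicted response cannot violate the constraint no matter how the parameters are trained.

```python
# Toy illustration of consistency-by-construction: the stress is the exact
# derivative of a convex potential psi(eps) = sum_k a_k^2 * eps^(2k+2)/(2k+2),
# so sigma(eps)*eps >= 0 holds for ANY parameter values, trained or random.

def stress(eps, raw_params):
    # sigma = dpsi/deps = sum_k a_k^2 * eps^(2k+1); squaring enforces a_k^2 >= 0
    return sum((a * a) * eps ** (2 * k + 1) for k, a in enumerate(raw_params))

# even with arbitrary (untrained) parameters the response stays admissible
params = [0.3, -1.2, 0.7]
admissible = all(stress(e / 10.0, params) * (e / 10.0) >= 0.0
                 for e in range(-20, 21))
```

The tensorial analogue, in which a learned potential is constrained to be convex, follows the same principle.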

This task will also benefit from the hybrid data-driven approach adopted in project 1a.

##### YOUR PROFILE

- An MSc degree in Computational mechanics, Computational materials science, Mechanical engineering, Applied physics, Data science or a related field with excellent grades.
- Special interest in modelling of production processes.
- A background in nonlinear solid mechanics, computational methods, material science and/or data science.
- Strong programming skills.
- A high degree of responsibility and independence.
- Strong communication skills for effective academic and industrial collaboration.
- Proficiency in English is required, both spoken and written (IELTS minimum score 6.5 or TOEFL-iBT minimum score 90).

##### OUR OFFER

- A dynamic and international environment, combining the benefits of academic research with a topic of high industrial relevance;
- excellent working conditions in an exciting scientific environment, and a green and lively campus;
- a fulltime 4 year PhD position;
- excellent mentorship and facilities;
- a professional and personal development program within Graduate School Twente;
- a starting salary of € 2.541 in the first year and a salary of € 3.247 in the fourth year gross per month;
- a holiday allowance of 8% of the gross annual salary and a year-end bonus of 8.3%;
- minimum of 29 holidays per year in case of fulltime employment;
- full status as an employee at the UT, including pension and health care benefits.

##### INFORMATION AND APPLICATION

Please submit your application **before January 13th, 2023** using the “Apply now” button, and include:

- curriculum vitae
- letter of motivation
- grades of the BSc and MSc courses
- IELTS or TOEFL score
- contact information of 2 references

The intended starting date is between March and June 2023.

For more information you can contact: Prof. Ton van den Boogaard, head of the chair Nonlinear Solid Mechanics, phone: +31 (0)53 489 4785, e-mail: a.h.vandenboogaard@utwente.nl,

Dr. Semih Perdahcioglu (**position 1b**), phone: +31 (0)53 489 2675, e-mail: e.s.perdahcioglu@utwente.nl, Dr. Jos Havinga (**positions 2a and 2b**), phone: +31 (0)53 489 6869, e-mail: jos.havinga@utwente.nl.

First (online) interviews will be held on January 24th and 27th, 2023.

A game-based assessment will be part of the selection procedure.

##### Title:

Bridging synthetic RVE data to micromechanical tests for efficient data-coupled models

##### Supervision:

Dr.ir. J.P.M. Hoefnagels (TU/e-MoM), Prof.dr.ir. M.G.D. Geers (TU/e-MoM), Dr.ir. R.H.J. Peerlings (TU/e-MoM)

##### Scientific challenges

RVE-based multiscale modelling methods are nowadays often used for accurate predictions of the engineering-scale mechanical response without the need for (calibration of) empirical macroscale constitutive equations. RVE-type calculations can be used for many different loading conditions and/or microstructural variations in order to numerically generate large sets of complete data (i.e. the full, loading-specific tensorial 3D stress and strain fields). Yet, the quality of this synthetic data, i.e. the numerical predictions of the applied micromechanical models, is only as good as the ingredients of the RVE, i.e. the detailed three-dimensional (i.e. sub-surface) morphology of the multiphase microstructure as well as the constitutive response of each of the crystallographic phases present in the RVE. The latter is especially hard to determine accurately. Data consistency and fidelity are therefore a major concern. Direct confrontation of RVE-based simulations with in-situ micromechanical tests in the literature consistently reveals large discrepancies in the location and magnitude of the microscopic deformation patterns (e.g. the slip bands).

The goal of DEPMAT lies in the accurate prediction of the macroscale behaviour, i.e. only the averaged mechanical response of the RVE is of prime interest. This still requires, however, that all essential 3D microstructural constituents and ingredients be included in the RVE, while their micromechanical response is modelled with adequate accuracy. This poses key challenges to all data-driven approaches that are based on synthetic RVE-type data sets. How to build RVEs with a minimal level of microstructural detail that find the optimum between accuracy and computational cost? How to validate the accuracy and associated statistical spread of such RVEs, e.g. as a function of recycling-induced compositional variations in the microstructure? How to extract fidelity measures for synthetic RVE data?

##### Methods

This project aims to address the DEPMAT challenge described above, with the goal of providing guidelines for constructing optimal RVEs in terms of accuracy, as well as identifying their fidelity. To this end, first, a few series of representative, relevant microstructures (from steel grades provided by the DEPMAT industrial partners) will be experimentally characterized in great detail, in terms of their microstructure (with electron backscatter diffraction and transmission electron microscopy) and their micromechanical response (by means of micro-tensile testing under in-situ scanning electron microscopy, combined with digital image correlation to obtain high-resolution surface strain fields [28]). In addition, nano-tensile tests on isolated grains and phases will provide the constitutive input response of the constituents. Subsequently, numerical RVEs with different levels of fidelity will be constructed and confronted, in a statistical sense, with the experimental data. Meaningful fidelity measures will be integrated in the data manifold, and the quantitative differences will be used to improve the underlying RVE models. To increase the resilience of the RVE simulations against composition variations, the loss of accuracy due to deviations from the original RVE microstructure will be unravelled by confrontation with the experimental data sets.

##### Title:

Inline hybrid modelling in cold rolling and forming

##### Description

The objective of this PhD project is to develop highly accurate hybrid models that can be used to relate indirect process measurements in metal forming processes (e.g. process forces or intermediate product geometry) to the material, product, and process properties. Key challenges in this respect are the limited accuracy of physics-based models, incomplete production data, uncertain fluctuations in process conditions, and the requirement of fast models. A new type of process model must be developed by exploiting the strengths of physics-based simulation models and of real-time production data.

##### Supervision:

Dr.ir. J. Havinga (UT-NSM), Prof.dr.ir. A.H. van den Boogaard (UT-NSM)

##### Scientific challenges

The objective of this project is to develop high-accuracy models that can be used to relate indirect process measurements (e.g. process forces or intermediate product geometry) to the material, product and process properties. Several effects, such as elastic springback and strong nonlinearities at the micro and macro level, make metal forming processes inherently challenging to predict accurately. Deviations between the simulation model and the actual process arise both from deliberate simplification and from a lack of knowledge about the underlying physics. Either way, additional information is required if the model is to be brought closer to the actual process. Data from the production line, gathered in large amounts, may potentially be used to bridge this gap, although several challenges must be addressed first.

A key challenge in this respect is that not all input and output variables of the process model are measured in the production line. Several indirect quantities, such as process forces and final product properties, can be measured in production, although these may not yield a unique solution for correcting the model if the relation between measurements and model inputs is insufficiently strong. Furthermore, the deviation between the physics-based model and the actual process is expected to vary over time, as properties related to unmodelled physics may drift during production (e.g. due to tool wear). Consequently, the corrective procedure must maintain a certain flexibility and assign a higher predictive value to the most recent production data. Finally, aiming to use the hybrid model for real-time process monitoring and model updating imposes stringent requirements on the model evaluation time, which must be orders of magnitude shorter than that of conventional FE models. It is common practice to create surrogate models based on a large number of offline evaluations of an expensive FE simulation, with methods ranging from simple polynomial regression to interpolation methods to machine learning. In many cases, model order reduction methods are used in the surrogate modelling procedure to reduce the dimensionality of the output space.
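The surrogate idea mentioned above can be sketched, under strong simplification, as fitting a cheap response surface to offline evaluations of an expensive model. Everything below (the stand-in model, a one-dimensional input, a quadratic basis) is hypothetical; real applications involve many inputs, reduced output spaces and more capable regressors.

```python
# Illustrative sketch: replace an "expensive" model evaluation by a
# polynomial response-surface surrogate fitted offline on a small design
# of experiments.

def expensive_model(x):
    # stand-in for a costly FE evaluation
    return 1.0 + 0.5 * x + 2.0 * x * x

def fit_quadratic(xs, ys):
    # least squares via the 3x3 normal equations, solved by Gaussian elimination
    A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    for col in range(3):                      # forward elimination with pivoting
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                       # back substitution
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
    return lambda x: c[0] + c[1] * x + c[2] * x * x

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
surrogate = fit_quadratic(xs, [expensive_model(x) for x in xs])
```

Once fitted, `surrogate` can be evaluated in microseconds, which is what makes inline use feasible.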

These challenges go beyond the state-of-the-art research in hybrid modelling for metal forming, which is based on stationary corrections with complete datasets. The development of a hybrid model correction procedure with incomplete production data will require a design in which the physics-based model complexity, the corrective model complexity, and the fitting procedure are properly balanced, in order for the procedure to successfully identify the proper model correction within reasonable computation time.

##### Methods

The basis of the hybrid cold-rolling model is the FE simulation of the process. A detailed single-stand model will be developed following the state-of-the-art modelling procedures for such processes. The model must include the most relevant parameters in relation to the process measurements and the product properties of interest. It must also be parametrized in such a way that it is applicable to a sufficiently wide range of material behaviour and process conditions. Also, a wide range of entry and exit sheet thicknesses must be modelled, as multiple rolling stands will be incorporated in the full cold rolling state estimation procedure.

In order to convert the expensive FE model into a surrogate model that can be used for inline process monitoring, suitable model reduction and fitting techniques will be investigated, including methods beyond those previously used in the research group involved. The notion that the physics-based model must later be corrected with production data will be accounted for in the selection of the reduction method.

The key target in this project will be the design of a procedure to correct the physics-based model using production data. This will be done in close collaboration with the postdoc from project 2e, who will dive into the mathematical foundations for the development of hybrid models given the specific requirements of industrial metal forming. Based on large sets of measured production data, different configurations for the physics-based surrogate model, the corrective function, and the fitting procedure will be tested and evaluated. The studied methods will include probabilistic approaches, as it is foreseen that these may be the best solutions for this application, given the potential non-uniqueness of the solutions, as well as the need to incorporate the model in a probabilistic state estimation procedure in PhD project 2b.

A separate target in this research project will be the design of a simple forming test that can be used by the manufacturer of the end product to determine specific characteristics of the material behaviour of each new material batch. Such an additional indirect measurement may be designed towards specific properties (e.g. hardness, ductility or anisotropy) and as such will complement the information that can be derived through indirect process measurements.

##### Title:

Inline probabilistic state estimation and model correction

##### Description

In this PhD project, fast and accurate procedures will be developed to simultaneously estimate process conditions and apply hybrid model correction. The developed procedures must be applicable in real-time during production. The methods must be formulated within a probabilistic framework and will therefore require a specific focus on the estimation of process statistics, process correlations and model uncertainty.

##### Supervision:

Dr.ir. J. Havinga (UT-NSM), Dr. P.K. Mandal (UT-HS), Prof.dr.ir. A.H. van den Boogaard (UT-NSM)

##### Scientific challenges

Given the hybrid model from PhD project 2a, and indirect measurement data from the production line, the challenge will be to develop a real-time estimation procedure that both determines the (non-stationary) hybrid model correction function and estimates process and product properties, especially those related to the material properties of the produced steel sheet. This work will be fully based on the concepts of Bayesian inference. Although the field of Bayesian inference is well developed and its concepts are widely applied, many significant obstacles have to be overcome in order to successfully implement such procedures for production processes such as cold rolling, given the complexity of the relations between hidden states and measurement data, combined with the fact that no sufficiently accurate model is initially available for this purpose. Although procedures have been developed for simultaneous model correction and state estimation, the application of such combined procedures will be new to the field of metal forming research.

Two essential building blocks of the Bayesian inference procedure will be a main focus of this project. Firstly, the expected levels of model uncertainty have to be quantified in order to determine probabilistic estimates of the model correction functions. These estimates of model uncertainty are closely linked to the space of model correction functions that are included in the hybrid model. Close collaboration with PhD project 2a and Postdoc project 2e is essential in this regard. Secondly, the rate at which the hidden states (such as material properties, lubrication properties, tooling alignment, etc.) are expected to change over time must be included in the estimation procedure. These statistical models characterize the speed at which process conditions may drift during production, and adequate models of the process dynamics are essential in order to make accurate estimations during production.

Finally, given a complete set of process and (hybrid) measurement models, with a predefined model input space to be estimated, it will be a key question how to numerically solve the estimation procedure, such that it will be applicable for inline use. The choice of the proper algorithm for solving the Bayesian equations will be strongly dependent on the architecture of the hybrid model.

##### Methods

Estimation of the dynamics of process variations is a key component of the Bayesian inference procedure. As the application of such methods for inline state estimation is still outside the scope of the metal forming research community, little knowledge has been gathered about such process dynamics. Dedicated experiments will be performed to shed light on the rate of change of different process disturbances (such as variability of lubrication conditions and of incoming material properties). The spatial and temporal variability of these disturbances will be characterized with standard statistical models, whose coefficients can be tuned based on production data. Special attention will be given to the change of process conditions from coil to coil, and within a coil.
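As a hedged example of such a standard statistical model, a first-order autoregressive (AR(1)) process is a common minimal choice for a disturbance that drifts from coil to coil; the coefficient values below are invented, not tuned to any production data.

```python
# Toy AR(1) drift model x_{k+1} = phi * x_k + w_k, w_k ~ N(0, sigma^2).
# phi close to 1 means slow drift; the stationary spread is sigma/sqrt(1-phi^2).

import random
import statistics

def simulate_drift(phi, sigma, n, seed=0, x0=0.0):
    rng = random.Random(seed)           # fixed seed for reproducibility
    xs, x = [], x0
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, sigma)
        xs.append(x)
    return xs

path = simulate_drift(phi=0.95, sigma=0.1, n=500)
spread = statistics.pstdev(path)        # approaches 0.1/sqrt(1-0.95**2) ~ 0.32
```

Fitting `phi` and `sigma` to logged disturbance estimates is one simple way to "tune the statistical coefficients based on production data".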

Similarly, model uncertainty quantification for such models is another open issue in the research field. In collaboration with Postdoc project 2e, different uncertainty quantification methods will be investigated. The uncertainty measure must be clearly linked with the estimation of the hybrid model correction parameters, in the sense that updating the hybrid model with process data will reduce the uncertainty of the model. In the end, the ratio between model uncertainty and measurement uncertainty will strongly influence the estimation trajectories. Therefore, the applicability of different uncertainty estimation methods can be verified based on the observed performance of the state estimation procedure.

Finally, the hybrid model, the statistical process model, and the Bayesian inference procedure will come together in the inline estimation procedure. Depending on the hybrid model architecture, different Bayesian inference algorithms may be employed. Numerical efficiency is of high importance, especially as the cold rolling process consists of multiple stands, leading to a high-dimensional state space that has to be tracked. When using relatively simple model architectures, (semi-)analytical algorithms such as the Extended Kalman Filter may be preferred. When using more complex nonlinear surrogate models, numerical integration of the Bayes equations must be implemented, for example using particle filtering. It is foreseen that state-of-the-art algorithms must be employed in order to prevent particle degeneracy and sample impoverishment in the estimation procedure.
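As a purely illustrative example of the (semi-)analytical end of this spectrum, the scalar Kalman filter below tracks one drifting hidden state observed through a noisy linear measurement. The process model, noise levels and measurement sequence are invented; the project's filters operate on high-dimensional states with the hybrid surrogate as measurement model.

```python
# Scalar Kalman filter: random-walk state x_k = x_{k-1} + w (w ~ N(0, Q)),
# linear measurement y = H*x + v (v ~ N(0, R)). Toy values throughout.

def kalman_step(x, P, y, Q=0.01, R=0.1, H=1.0):
    P = P + Q                            # predict: variance grows with drift
    K = P * H / (H * P * H + R)          # Kalman gain
    x = x + K * (y - H * x)              # update with the innovation
    P = (1.0 - K * H) * P                # posterior variance shrinks
    return x, P

x, P = 0.0, 1.0                          # vague prior on the hidden state
for y in [0.9, 1.1, 1.0, 1.05]:          # four noisy indirect measurements
    x, P = kalman_step(x, P, y)
```

After four measurements the estimate sits close to the underlying level while the posterior variance has dropped by more than an order of magnitude; a particle filter replaces this analytical update with weighted samples when the measurement model is nonlinear.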


##### Title:

Hybrid modelling for inline estimation of oxide layer formation

##### Supervision:

Dr. D. Zarouchas (TUD-ASM)

##### Scientific challenges

Surface quality of steel strip is an extremely important property for many applications. The surface quality is to a large extent determined by oxidation in the various process steps of steel production. Besides outward appearance, the oxide scale also has a large impact on the emissivity of steel, which plays a major role in the (perceived) temperature evolution of the strip during the steel production steps. If not properly taken into account, it can reduce the accuracy of the pyrometers used as sensors for the strip temperature, and it also directly influences the temperature evolution in (e.g.) a continuous annealing line. When the formation of oxides is not accounted for in both the temperature measurements and the process settings of the annealing line, the material properties will be severely affected due to the incorrect temperature evolution during annealing.

Ideally, the hot strip mill process is steered such that no detrimental oxides are formed. As this is often not possible, information about the formed oxides can alternatively be used to estimate the change in surface emissivity, and can therefore be taken into account in the process control models of the continuous annealing line. Existing (physical) models that predict the oxide layer thickness as a function of the hot rolling process conditions and the composition are being developed further in the DENS (Digitally Enhanced New Product development) program. However, the accuracy of these models is still limited, because the exact conditions in terms of temperature and oxidizing atmosphere in a hot strip mill are difficult to characterise (and measure). Furthermore, the oxidation is strongly related to the exact material composition, and therefore significantly affected by the addition of tramp elements.

The oxide layer thickness and the surface emissivity are the state variables that can be used to adjust the process settings in the annealing line. However, these cannot be measured directly, but may potentially be inferred from a set of indirect measurements that are affected by the oxide layer. To do so, predictions from the existing physical models must be aggregated with different sources of process data, namely data from optical camera systems, pyrometer measurements from the annealing line, and electromagnetic measurements. It is expected that these data sources can be combined in a hybrid model that can be used to predict strip emissivity, in order to adjust the annealing process and thereby control the material properties.

##### Methods

The aim of this PhD project is to correlate the oxide layer thickness and the surface emissivity with indirect measurements, and to predict the strip emissivity in order to adjust the annealing process. The strip emissivity is modelled as a multi-state stochastic process that can be captured using the process monitoring data, i.e. images, pyrometer and electromagnetic measurements. The non-homogeneous hidden semi-Markov model (NHHSMM), a generalized form of the Markov chain, offers the mathematical background to capture the dynamic process of oxidation. This process depends on the current state, the time spent in this state and the total time of the process up to that state. As such, the NHHSMM can be used as a machine-learning algorithm that can be trained to correlate the state of oxidation with the measurements (observations) and to predict the strip emissivity. The prediction will be computed with confidence intervals, supporting the production of steel with the desired mechanical properties. During the training of the algorithm, an important task is the selection of the features from the data with the highest value of information. To achieve this, the physical model will be employed as a basis to select the features that provide the most accurate results, and neural networks (NNs), i.e. convolutional NNs and deep learning, will be employed to identify these features within the plethora of data by filtering out measurement uncertainties.
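The full NHHSMM additionally models duration-dependent, non-homogeneous transitions; as a heavily reduced illustration of the underlying filtering idea only, the snippet below performs one forward (predict-update) step of a plain hidden Markov model. The two oxidation states and all probabilities are invented.

```python
# One Bayesian filtering step of a plain HMM: propagate the belief over
# hidden oxidation states with the transition matrix, then reweight by the
# likelihood of the current indirect measurement. All numbers are invented.

def forward_step(belief, transition, likelihood):
    n = len(belief)
    pred = [sum(belief[i] * transition[i][j] for i in range(n)) for j in range(n)]
    post = [pred[j] * likelihood[j] for j in range(n)]
    z = sum(post)                      # normalisation constant
    return [p / z for p in post]

# two toy states: 0 = "thin oxide", 1 = "thick oxide" (oxide only grows)
belief = [0.5, 0.5]
transition = [[0.9, 0.1], [0.0, 1.0]]
likelihood = [0.2, 0.8]               # the measurement favours "thick"
belief = forward_step(belief, transition, likelihood)
```

Repeating this step over the measurement sequence yields the evolving belief from which emissivity predictions and confidence intervals can be derived.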

##### Title:

Microstructure estimation with electromagnetic sensing

##### Supervision:

Dr.ir. J. Havinga (UT-NSM), Dr.Habil. S. Soyarslan (UT-NSM)

##### Scientific challenges

Compared to conventional destructive offline testing methods, electromagnetic sensor measurements have clear advantages: they are non-destructive, online and relatively fast. Nevertheless, inferring magnetic properties from sensor signals and relating them to the material's microstructure is a demanding task. Of particular concern are the residual stresses, e.g. due to the rolling processes to which the steel strip is subjected, and their orientation with respect to the material's microstructure. This requires consideration of the magneto-mechanical coupling, also referred to as magnetostriction. Although there exist studies on finite-strain magneto(visco)elasticity, a systematic understanding of how magnetostriction contributes to the polycrystalline magnetic response in the presence of finite plastic strains is still lacking. Moreover, the three-dimensional microstructural geometry constituting the medium that hosts the electromagnetic fields, with its phase distribution, morphology, size, and boundaries, should be quantified and modelled accurately, and a corresponding stochastic sensitivity analysis which explicitly considers the associated uncertainties needs to be performed. Another potential challenge emerges with the field intensities used in electromagnetic sensors. For low-frequency, low-intensity time-harmonic eddy current tests, linearized sensor output modelling with linear effective magnetic properties suffices. Higher field intensities, however, require a hysteretic B-H curve and corresponding nonlinear and transient sensor output modelling.

##### Methods

This project will extend ongoing research at the UT on physics-based modelling of electromagnetic measurements of steel sheets towards the application in a steel production line where a wide variety of grades are being produced. The following key tasks will be performed:

Microstructure geometry modelling: Existing methods either use 2D finite element models built from 2D micrographs or 3D generations using extended Voronoi tessellation methods. The former does not allow sufficient accuracy in predictions, as it artificially restricts the magnetic flux paths. The latter, although allowing control of grain size, grain morphology, texture, phase ratios and their distributions, is limited to certain microstructural forms. Instead, more general statistical microstructure regeneration methods, which require micrographs on only three mutually orthogonal planes instead of fully resolved 3D tomographic images, will be used to create proper 3D microstructure representations. These will be enriched with the inclusion of magnetic domains.
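
Statistical microstructure regeneration methods of this kind typically match low-order descriptors such as the two-point correlation of a phase indicator, which can be computed per micrograph plane via FFT. A minimal sketch, with a synthetic random two-phase map standing in for a real micrograph:

```python
import numpy as np

def two_point_correlation(phase):
    """Periodic two-point autocorrelation S2(r) of a binary phase map.

    phase : 2D array of 0/1 phase indicators (one micrograph plane).
    Returns an array of the same shape; S2 at zero shift equals the
    phase volume fraction.
    """
    f = np.fft.fftn(phase)
    s2 = np.fft.ifftn(f * np.conj(f)).real / phase.size
    return s2

# Toy 64x64 two-phase "microstructure" (random, for illustration only).
rng = np.random.default_rng(0)
micro = (rng.random((64, 64)) < 0.3).astype(float)
s2 = two_point_correlation(micro)
vf = micro.mean()   # volume fraction; equals s2 at zero shift
```

Matching such descriptors on three orthogonal planes is one way to constrain a 3D regeneration, under the assumption of statistical homogeneity.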

Phase-field modelling of magnetic hysteresis and magnetomechanical coupling: As a continuum diffuse-interface approach, phase-field models allow tracking the evolution of magnetic domains. Combining this with a robust finite element-based computational homogenization framework, we will predict the nonlinear hysteretic magnetic response, initially in the absence of mechanical effects. Existing methods for magnetomechanical coupling at large deformations are limited to finite magneto(visco)elasticity. Metal forming processes, however, require finite plastic strains to be considered, e.g. in cold rolled steels. To this end, the developed phase-field-based computational homogenization framework will be extended with a variationally consistent magnetomechanical coupling at finite elastoplastic strains.

Modelling electromagnetic sensor output: This work package concerns finite element modelling of electromagnetic testing with effective material properties for the sheet. This allows one to compute sensor outputs for various field intensities and frequencies and to relate these to experimental outputs. By making sensitivity analyses possible, the finite element method also allows one to optimize sensor systems with respect to sensor geometry and location.
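
The frequency dependence mentioned above is governed by the electromagnetic skin depth, δ = √(2/(ωμσ)), which sets how deeply eddy currents probe the sheet. A small illustrative calculation; the conductivity and permeability values below are assumed, indicative numbers only, not measured properties of any particular grade:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability [H/m]

def skin_depth(freq_hz, mu_r, sigma):
    """Electromagnetic skin depth delta = sqrt(2 / (omega * mu * sigma)) [m]."""
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * mu_r * MU0 * sigma))

# Indicative values for a mild steel (assumed for illustration):
sigma_steel = 5e6      # electrical conductivity [S/m]
mu_r_steel = 100.0     # relative permeability

for f in (10.0, 1e3, 100e3):
    d = skin_depth(f, mu_r_steel, sigma_steel)
    print(f"{f:>9.0f} Hz -> skin depth {d * 1e3:.3f} mm")
```

Low test frequencies thus sense the full sheet thickness, whereas high frequencies confine the response to a thin surface layer.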

##### Title:

Mathematical foundations of hybrid modelling for complex manufacturing processes

##### Supervision:

Prof.dr. J. Schmidt-Hieber (UT-MOR), Dr.ir. J. Havinga (UT-NSM)

##### Scientific challenges

Correcting physics-based process models based on observed experimental data is the key objective of WP2. The challenge is to define a correction space that guarantees that the main relations from the physics-based model are maintained, while being sufficiently flexible to correct for the model inaccuracies. The choice of a function space that maps the uncorrected to the corrected data requires assumptions about the proximity between the uncorrected and corrected models. Given such assumptions, a suitable correction method has to be developed, including the selection of specific types of corrective function spaces that maximize the detectability of a suitable solution, which should also be robust to outliers in the dataset.

Besides the aforementioned fundamental modelling framework that is yet to be investigated, several additional challenges emerge from the application to these highly complex metal manufacturing processes. Firstly, the available data that can be used for the model correction will not be complete, in the sense that not all input and output variables of the physics-based model can be directly measured in the manufacturing process. Secondly, the nature of the process implies that the correction may not be stationary: the actual difference between the physics-based model and the actual process may vary over time, due to drifts of unmodelled conditions (such as tool wear). Thirdly, the processes consist of multiple steps, for which the accuracy of the models and of the measurement data may differ strongly. The research towards a mathematically sound approach for model correction must therefore be performed within this set of constraints.

##### Methods

We will employ one postdoc with a background in mathematics/statistics/machine learning for the modelling of the steel production process. To address this challenge, our main idea is to design a Bayes-type approach that assigns the physics-based model the role of the prior and allows the data to correct the prior model if there is enough evidence in the measurements. The corrective function space and the required evidence levels depend on prior assumptions about model and measurement uncertainty. The relations between assumptions, model parametrization and performance (i.e. detectability and robustness) will be investigated in a mathematically rigorous way by adapting tools from the well-developed field of nonparametric Bayes.
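
For a single scalar quantity under conjugate Gaussian assumptions, this Bayes-type idea can be sketched in a few lines: the physics-based prediction acts as the prior mean, and measurements pull the posterior away from it only when there is enough evidence. All numbers below are illustrative:

```python
import numpy as np

def bayes_correct(physics_pred, prior_var, measurements, noise_var):
    """Posterior for a scalar quantity with the physics model as prior.

    Gaussian prior N(physics_pred, prior_var) combined with iid Gaussian
    measurements of variance noise_var; returns (posterior mean, variance).
    With few data the posterior stays near the physics prediction; with
    many data it follows the measurements.
    """
    n = len(measurements)
    post_prec = 1.0 / prior_var + n / noise_var
    post_mean = (physics_pred / prior_var
                 + np.sum(measurements) / noise_var) / post_prec
    return post_mean, 1.0 / post_prec

# Physics model predicts 10.0; noisy sensors consistently read near 12.0.
few = bayes_correct(10.0, 1.0, [12.1, 11.9], 4.0)
many = bayes_correct(10.0, 1.0, [12.0] * 50, 4.0)
```

The nonparametric setting envisaged in the project replaces this scalar by a function-valued correction, but the same prior-versus-evidence trade-off applies.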

The uncertainty quantification (UQ) of the models is a key component of the procedure. While UQ comes as a by-product of Bayesian methods, for deep learning and other machine learning methods several suggestions for uncertainty quantification have been made in the literature, which we can use as a starting point.
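
One such suggestion from the literature is Monte Carlo dropout, where dropout is kept active at prediction time and the spread of repeated stochastic forward passes serves as an uncertainty proxy. A toy sketch with an untrained two-layer network (weights random and arbitrary, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny fixed two-layer network; the weights carry no physical meaning.
W1 = rng.normal(size=(8, 1))
W2 = rng.normal(size=(1, 8))

def predict_mc_dropout(x, n_samples=200, p_drop=0.2):
    """Monte Carlo dropout: repeat stochastic forward passes with random
    dropout masks and treat the mean/std of the predictions as a point
    estimate with an uncertainty proxy."""
    preds = []
    for _ in range(n_samples):
        h = np.tanh(W1 @ np.atleast_2d(x))
        mask = rng.random(h.shape) > p_drop    # random dropout mask
        h = h * mask / (1.0 - p_drop)          # inverted dropout scaling
        preds.append((W2 @ h).item())
    preds = np.array(preds)
    return preds.mean(), preds.std()

mean, std = predict_mc_dropout(0.5)
```

In a trained network, the standard deviation would be large in input regions the training data does not cover well, which is exactly where the physics prior should dominate.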

The modelling procedure might require additional regularization in the individual models to stabilize the overall outcome. We expect that the performance of the full model depends on the modelling of a few crucial stages in the cold rolling line for which only indirect measurements are available. By making the link to the literature on finding optimal design distributions, we will then try to come up with recommendations on how to collect additional measurements in order to optimally improve the overall model performance.
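
As a toy stand-in for such a design criterion, one can select the next measurement at the candidate input with the largest posterior predictive variance of a Bayesian linear model; the candidate points and noise levels below are invented for illustration:

```python
import numpy as np

def next_measurement(candidates, X_obs, noise_var=0.1, prior_var=10.0):
    """Pick the candidate input with the largest posterior predictive
    variance under Bayesian linear regression (a simple stand-in for an
    optimal-design criterion).

    candidates : (m, d) candidate input points
    X_obs      : (n, d) inputs already measured
    """
    d = candidates.shape[1]
    # Posterior precision of the weights: prior + data contributions.
    prec = np.eye(d) / prior_var + X_obs.T @ X_obs / noise_var
    cov = np.linalg.inv(prec)
    # Predictive variance x^T Sigma x for every candidate x.
    pred_var = np.einsum('md,de,me->m', candidates, cov, candidates)
    return int(np.argmax(pred_var))

# We have only measured near x = (1, 0); x = (0, 1) is most informative.
X_obs = np.array([[1.0, 0.0], [1.1, 0.05], [0.9, -0.05]])
cands = np.array([[1.0, 0.0], [0.0, 1.0]])
best = next_measurement(cands, X_obs)
```

The same principle, applied to the stages with only indirect measurements, would indicate where additional sensors or experiments improve the overall model most.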

##### Title:

Fast micromechanical models for (data-enhanced) materials engineering

##### Supervision:

Prof.dr.ir. M.G.D. Geers (TU/e-MoM), Dr. S. Kumar (TUD-MSE)

##### Scientific challenges

Macro-scale mechanical properties of steel are predicted by high-fidelity numerical models of micro-scale material domains. These models exist for the mechanics of multiphase metallic microstructures that generally consider a Representative Volume Element (RVE) of the microstructure, in which the individual grains are typically modelled by state-of-the-art single-crystal plasticity. These models are an essential part of current generations of through-process model chains (or digital twins) of materials production, in which they serve to predict the mechanical performance of RVEs generated by microstructure evolution models which precede them in the process chain. Their input generally consists of the phase distribution in space, orientations of the individual crystals, chemical composition of the phases, etc. The main output of interest is the hardening behaviour and formability of the (in silico) material. The significant computational cost of the current numerical models is due to the complexity of the (three-dimensional) microstructural geometry and constitutive modelling, together with the fine spatial and temporal discretization required to solve them. At the same time, there is a constant push for the models to involve (and predict) more details of the microstructure.

##### Methods

PhD 3a aims at speeding up the micromechanics simulations substantially (at least two orders of magnitude), at the cost of some (but not too much) of the fidelity of the detailed modelling. The researcher will address core challenges in handling composition variations in through-process modelling, as necessitated by more circularity in the lifetime of metallic materials:

- Quantitatively predict elasto-plastic responses of steels from micro-scale RVEs including (segregated) composition data generated earlier in the model chain;
- Accelerate such predictions (the main objective of the work package) by faster micromechanics simulations, so that more (computational) trials can be done and much more data can be generated, resulting in better-optimized processing and hence better materials with quantifiable uncertainty.

This will enable process designers to develop methods which are “adaptive” in the sense that depending on input data (as composition) and data from earlier processing steps, subsequent production/manufacturing steps can be adjusted.

A secondary industrial objective which the project aims to support is the control of anisotropy of the produced sheet material. The origins of coil-to-coil (or even intra-coil) variations of anisotropy are insufficiently understood. We plan to employ the microstructural modelling developed specifically to study the role which microstructure plays in this problem.

PhD 3a will achieve the acceleration of the RVE models by combining dedicated computational reduction methods, akin to reduced order modelling, which exploit the similarity of different realizations of statistically similar microstructures, with careful idealization of the microstructural geometry and constitutive behaviour – e.g. where possible eliminating irrelevant details from the geometry, exploiting symmetries, replacing crystal plasticity by isotropic plasticity, replacing (coupled) damage models by damage indicators, etc. Each of these reduction methods needs to be carefully assessed in terms of its effect on the fidelity of the modelling, by comparison with a set of full-scale simulations, so that users can make a well-informed trade-off between accuracy and speed. At the same time, as a secondary aim, PhD 3a plans to refine the microstructural modelling by taking into account the spatially resolved compositions as delivered by the methods to be developed by PhD 3b, which in turn depend on the results of PhD 3c.
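
One common ingredient of such reduced order modelling is proper orthogonal decomposition (POD): an SVD of full-order solution snapshots yields a low-dimensional basis onto which new RVE solutions can be projected. A minimal sketch with synthetic snapshot data (not an actual RVE solver):

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Proper orthogonal decomposition of a snapshot matrix.

    snapshots : (n_dof, n_snap) array; columns are full-order solution fields.
    Returns the smallest orthonormal basis capturing `energy` of the
    snapshot variance (measured by squared singular values).
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(frac, energy)) + 1
    return U[:, :k]

# Synthetic snapshots living (up to tiny noise) in a 3-dimensional subspace,
# mimicking the similarity of statistically similar microstructure realizations.
rng = np.random.default_rng(2)
modes = rng.normal(size=(500, 3))
coeffs = rng.normal(size=(3, 40))
snaps = modes @ coeffs + 1e-8 * rng.normal(size=(500, 40))
basis = pod_basis(snaps)       # recovers the 3 dominant modes
reduced = basis.T @ snaps      # reduced coordinates of each snapshot
```

The speed-up comes from solving the governing equations in the reduced coordinates; the retained energy fraction provides the accuracy-versus-speed dial mentioned above.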

##### Title:

Accelerating Microstructural Modeling of Metallic Microstructures with Machine Learning

##### Job description

The last decades have seen remarkable progress in predictive computational modeling of microstructural evolution and macroscopic mechanical properties of metals. However, these high-fidelity models are only accurate when they use vast computational resources. They are therefore viable for predicting a small set of macroscopic properties with known composition under particular processing conditions, but their cost is a major impediment to on-demand design of new customized metals with adequate performance for a given application.

Machine learning has emerged as a promising candidate to bypass the aforementioned limitations and enhance classical computational modeling frameworks. This project will focus on developing a fundamentally new physics-integrated deep learning framework for accelerating steel microstructure evolution models by orders of magnitude, enabling fast macroscale property prediction and processing optimization, thereby opening avenues to novel materials-by-design.

The project presents a unique opportunity to work in a collaborative team and consortium of academic research groups and industrial partners. This position will be embedded in the research groups of Dr. Sid Kumar (Mechanics, Materials, and Computing Group, TU Delft; mech-mat.com) and Dr. Kees Bos (Tata Steel Europe and TU Delft) along with collaboration with Dr. Miguel Bessa (Brown University).

##### Title:

Physics-Based Modelling of Segregation of Impurity Elements and Its Effects on Metallurgical Processes

##### Job description

In a more sustainable world recycling is essential. Recycling of steel causes high concentrations of impurity elements such as copper, that segregate at grain boundaries. Such local segregation has a big impact on physical phenomena during processing and on the final properties of products. There are many types of grain boundaries and many types of impurities, while modeling physical phenomena can be challenging in its own right. Therefore significant computational resources are required for predictive computational modeling. To optimally utilize modelling data, and to enhance predictive ability, machine learning in various forms is employed.

The project presents a unique opportunity to work in a collaborative team and consortium of academic research groups and industrial partners. This position will be embedded in the research groups of Dr. Marcel Sluiter (TU Delft & U Ghent), Dr. Sid Kumar (TU Delft), and Dr. Kees Bos (Tata Steel Europe and TU Delft) along with collaboration with Dr. Miguel Bessa (Brown University).

##### Title:

AI Process Model

##### Supervision:

Dr.ir. L.Y. Chen (TUD-DS), Dr.ir. K. Bos (TUD-MSE)

##### Scientific challenges

The steel production process has been explored via traditional process modelling, e.g. in the DENS project. The drawbacks of this approach are that it is time-consuming and relies on rigid stepwise input-output modelling, and hence it lacks predictability for unseen compositions and the capability to handle missing information. As a result, it cannot easily be adapted to incorporate other models proposed in this project, e.g. the data-driven model of hot rolling, because of different fidelities and even missing information.

The aim of this task is to design an over-arching process model that can integrate the outputs of physical models, process measurements and fast models, and then predict the final performance. To derive an AI process model and overcome the limitations faced by the state of practice, we will answer the following questions:

- What is the optimal AI model that can integrate spatial and temporal inputs with missing values, in conjunction with semi- and incorrect supervision from the existing process model?
- How can the knowledge of existing process models be incorporated? What type of supervision can a standard process model provide to the AI models? How should inaccurate predictions be dealt with?
- How can one predict the seemingly impossible, i.e. unseen compositions beyond the physical limit? How can a small-scale laboratory experiment be constructed, and expert opinions be incorporated, to increase predictability?

##### Methods

To develop over-arching process models addressing the aforementioned problems, we will draw methodologies from multi-view learning models, which explicitly model and integrate multiple views/models to form a cohesive prediction.

Models obtained from WP1-WP3 will each be regarded as one of the views/models, whose input and output will be provided as an input pair to the process model. For instance, predicting the rolling thickness can be represented by the views of microstructure and process measurement, which take the same composition as input. The first challenge here is to capture the spatial and temporal dependency of different views/models. For instance, physical models may be slower in providing the input pairs needed for the process model than the data models. We will hence use neural networks that can explicitly incorporate the temporal dependency of inputs, e.g. recurrent neural networks and autoregressive models. This leads to the second challenge of how to deal with missing input pairs from slower models. We will leverage the joint spatial and temporal dependency, i.e. extrapolating/imputing the missing inputs at time t from other models at time t or from the inputs at t-1. To mitigate the risk of relying on unconfident models, we plan to conduct additional experiments that can provide additional input pairs through design of experiments and active learning.

The former will serve as a means to obtain additional laboratory results, and the latter provides a rigorous framework to incorporate the opinions of process engineers.
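
The imputation of missing inputs from slower models can be sketched in its simplest form: carry the last available value forward while decaying it toward the series mean, a crude AR(1)-style stand-in for the joint spatio-temporal models envisaged here. The decay factor and data below are illustrative:

```python
import numpy as np

def impute_missing_views(series, decay=0.9):
    """Fill missing (NaN) entries of a slow model's output stream.

    series : 1D array with NaN where the slow model has not yet produced
             an output for that time step.
    Each missing step decays the last known value toward the series mean,
    mimicking an AR(1) process; observed values reset the state.
    """
    out = series.copy()
    mean = np.nanmean(series)
    last = mean
    for t in range(len(out)):
        if np.isnan(out[t]):
            last = mean + decay * (last - mean)   # decay toward the mean
            out[t] = last
        else:
            last = out[t]
    return out

x = np.array([2.0, np.nan, np.nan, 4.0, np.nan])
filled = impute_missing_views(x)
```

A trained recurrent or autoregressive network would replace the fixed decay with dependencies learned jointly across views.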

##### Title:

Meta/Transfer Learning

##### Supervision:

Dr.ir. L.Y. Chen (TUD-DS), Dr.ir. K. Bos (TUD-MSE)

##### Scientific challenges

The state of the art derives and trains the process model from scratch even though there are similarities in the overall processes, lacking the capability to leverage prior knowledge. The emerging methodologies in meta learning and multi-task learning aim to exploit the similarities between learning tasks and to increase adaptability. However, little is known about how to apply meta or multi-task learning to material process models which are composed of multiple neural networks corresponding to different physical and data models.

The aim of this task is to develop a methodology that can agilely identify the common mathematical representation across steel or metal production processes, such that trained AI process models can easily be adapted from one well-established production process to another. To derive such a methodology for transferring a process model from one metal to another, we will answer the following questions:

- Can the proposed transfer methodology be applied across metrics and across metals, e.g. hardness vs. impurity? Should we use a meta-learning approach or a multi-task approach?
- How can a set of common features across the metal productions of interest be identified? Microstructure is universal across metals but poses the inherent challenge of representing/interpreting its 3D nature.
- How can a neural network architecture and training procedure be designed that represent the common features of different metal processes while still preserving the process-specific components?

##### Methods

To address the fast adaptation to modelling different properties and steels, we will explore methodologies from state-of-the-art multi-task and meta learning. The former, e.g. MOCHA, focuses on the network architecture of common and unique features of different metals. The latter, e.g. MAML, focuses on the adaptation from one trained model to another, using only a few shots of additional experiments. We will start with transferability across different performance metrics, as the underlying system and all the modelling components are identical. Thus, we can avoid the complex problem of identifying the common components across “learning tasks”, as arises when transferring across different metal productions. We assume that the AI process model shall be metric-specific while sharing a similar structure.

Under the framework of multi-task learning, we will assign one metric per task, all of which share common neural networks. As for meta learning, we will first focus on one specific metric and then use the optimization framework to adapt to other metrics. The second phase of this PhD study will focus on model transfer across metal productions which only have partial overlap, e.g. cold rolling. We will focus on the commonly shared processes and extract the corresponding components of the AI process model. We expect that domain expert knowledge from WP1 will be crucial in achieving this task. We can then apply a similar approach as developed for cross-metric model transfer.
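
The "one metric per task, shared common networks" idea can be sketched as a shared-trunk architecture with task-specific heads. The sketch below shows only the forward pass with random untrained weights; the task names and sizes are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

class SharedTrunkMultiTask:
    """One shared representation ('trunk') with one linear head per
    performance metric, mirroring a multi-task setup where tasks share
    common features but keep task-specific components.
    Illustrative forward pass only; no training loop."""

    def __init__(self, n_in, n_hidden, task_names):
        self.W_shared = rng.normal(size=(n_hidden, n_in)) * 0.1
        self.heads = {t: rng.normal(size=(1, n_hidden)) * 0.1
                      for t in task_names}

    def forward(self, x, task):
        h = np.tanh(self.W_shared @ x)       # features shared by all tasks
        return (self.heads[task] @ h).item() # task-specific head

model = SharedTrunkMultiTask(n_in=4, n_hidden=16,
                             task_names=["hardness", "anisotropy"])
x = np.ones(4)
y_hard = model.forward(x, "hardness")
y_anis = model.forward(x, "anisotropy")
```

Transferring to a new metric would then amount to freezing or lightly fine-tuning the shared trunk and training only a new head, which is the kind of fast adaptation the meta-learning phase targets.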