Specific Research Themes

PhD Program in Computer Science and Systems Engineering

XXXIII Cycle - Specific Research Projects

Curriculum: Computer Science and Data Science

Curriculum: Computer Engineering

Curriculum: Secure and Reliable Systems (funded by FBK - Trento)

Curriculum: Systems Engineering


Computer and Data Science


Machine Learning methods for big data in biology and medicine.

Proposer: Annalisa Barla, Alessandro Verri
Research area(s):  Computational Biology
Curriculum: Computer Science

Data made available by new technologies in medicine such as Next Generation Sequencing pose complex problems from the viewpoint of understanding the real structure underlying the biological/medical phenomenon under study (e.g., identifying explanatory genes for a given disease). Indeed, new computational methods are needed to deal with: (a) ever increasing dimensionality, (b) the discrete nature of the measures, (c) high correlation among measured molecular variables, (d) incompleteness of measures, (e) sparseness of the underlying model and (f) computational limits of existing methods and algorithms.

This research project aims at investigating:
- statistical learning methods customized for genomics sequencing data, focusing in particular on how to deal with categorical and incomplete or unavailable measures
- well-founded methods that enhance the interpretability of biological results inferring, for instance, the interactions among molecular entities (network reconstruction)  
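As a purely illustrative sketch (not the proposers' method), a sparsity-inducing penalty such as the l1 norm is one standard way to address points (a) and (e) above: it selects a small set of explanatory variables from high-dimensional data. Everything below (data, parameters) is synthetic:

```python
import numpy as np

def lasso_ista(X, y, lam=0.1, n_iter=500):
    """Sparse linear regression via iterative soft-thresholding (ISTA).

    Minimizes (1/2n)||y - Xw||^2 + lam*||w||_1; the l1 penalty drives most
    coefficients to exactly zero, mimicking the selection of explanatory genes."""
    n, d = X.shape
    lr = n / (np.linalg.norm(X, 2) ** 2)  # step size 1/L for the smooth part
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft threshold
    return w

# Synthetic "expression matrix": 50 samples, 200 variables, only 3 relevant.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))
true_w = np.zeros(200)
true_w[[3, 17, 42]] = [2.0, -1.5, 1.0]
y = X @ true_w + 0.01 * rng.standard_normal(50)

w = lasso_ista(X, y, lam=0.05)
selected = np.flatnonzero(np.abs(w) > 0.1)
print(selected)  # indices of the variables the model deems explanatory
```

In practice one would use cross-validated solvers and handle categorical or missing measures explicitly, which is precisely where this project aims to go beyond off-the-shelf tools.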

Link to the group or personal webpage:


Computational Intelligence and Well-Being Technologies

Proposers: Francesco Masulli and Stefano Rovetta
Curriculum: Computer Science
Research area(s): Computational Intelligence, Well-Being Technologies

In the last few years many low-cost sensing devices have become available in the consumer electronics market, including activity-tracking wristbands with accelerometers and heart rate, skin conductance and temperature monitoring; motion and gesture trackers; eye trackers; and wireless brain activity trackers employing a few selected EEG channels. These sensing devices, possibly integrated with standard computers, mobile phones and tablets, make it possible to develop individual health and wellbeing support systems, with functions such as monitoring the health state of fragile people, sleep analysis, and serious games for cognitive and physical rehabilitation.

To effectively exploit this wealth of real-time, multi-modal data, however, simple storage and off-line inspection is not sufficient. Computational intelligence and learning machines, which are currently applied in a variety of problems and settings, provide solutions to this problem. New theoretical advancements in this field can boost the methodologies we apply in our applicative projects, including supportive technologies for “fragile” people, methods for early detection of behavioral changes, and hedonic IT systems design.

In particular, the development of more powerful and flexible stream-oriented data clustering methods offers the potential for boosting the data mining capability of automatic systems. Possible outcomes of this research are effective methods to cope with large data size, throughput, and information content, for low-storage learning, and for learning in the presence of concept drift.
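As a minimal, hypothetical illustration of stream-oriented clustering under concept drift (not the group's published methods), an online k-means update with a forgetting factor keeps only O(k) state per stream and lets centroids track slowly moving clusters:

```python
import random

class StreamKMeans:
    """Online k-means with exponential forgetting, for drifting streams."""

    def __init__(self, centroids, decay=0.05):
        self.centroids = [list(c) for c in centroids]
        self.decay = decay  # higher decay = faster adaptation, noisier centroids

    def update(self, x):
        # assign the sample to the nearest centroid...
        j = min(range(len(self.centroids)),
                key=lambda i: sum((a - b) ** 2
                                  for a, b in zip(self.centroids[i], x)))
        # ...then move that centroid a small step toward the sample
        for d in range(len(x)):
            self.centroids[j][d] += self.decay * (x[d] - self.centroids[j][d])
        return j

random.seed(0)
model = StreamKMeans([[0.0, 0.0], [5.0, 5.0]])
# A stream whose second cluster drifts from (5, 5) toward (8, 8): concept drift.
for t in range(2000):
    if random.random() < 0.5:
        x = [random.gauss(0, 0.3), random.gauss(0, 0.3)]
    else:
        drift = 3.0 * t / 2000
        x = [random.gauss(5 + drift, 0.3), random.gauss(5 + drift, 0.3)]
    model.update(x)
print(model.centroids[1])  # the second centroid has followed the drift
```

The forgetting factor is the design knob: a batch method would average over the whole history and lag far behind the drifting cluster, while the exponential update discounts old samples automatically.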

Link to the group or personal webpage: http://www.disi.unige.it/person/MasulliF/ricerca/index.html

References: Filippone, M., Camastra, F., Masulli, F., & Rovetta, S. (2008). A survey of kernel and spectral methods for clustering. Pattern Recognition, 41(1), 176-190.

Browsing in videos datasets

Proposer(s):  Francesca Odone
Research area(s):  Computer Vision, Machine Learning
Curriculum: Computer Science


Image understanding is a mature research field which has recently achieved very significant results. In the last few years video understanding has emerged as a complementary task, where one aims at associating semantic information with a video, a portion of it, or a frame, considering appearance as well as dynamic information (what object? what action?).
The goal of this project is to address a video classification problem: parsing possibly large sets of videos and detecting meaningful keyframes or video shots which can then be associated with a specific semantic content (the presence of a given object or a specific action). In the first, exploratory part of the research we will consider state-of-the-art approaches to the problem, including but not restricted to convolutional neural networks (CNNs). See for instance [1,2] and references therein.
A main objective of the research will be to exploit context and complementary information, including cross-related information about the actor and the action being performed (see for instance [3]).

The reference applications we will consider are YouTube videos and sports videos.
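For a concrete baseline, keyframes can be proposed by a classic pre-CNN heuristic: flag frames whose intensity histogram departs sharply from the previous frame. The sketch below is illustrative only, with synthetic arrays standing in for decoded video frames:

```python
import numpy as np

def keyframes(frames, bins=8, threshold=0.5):
    """Return indices of frames whose intensity histogram differs sharply
    from the previous frame (a simple shot-boundary heuristic)."""
    keys = [0]  # always keep the first frame
    prev_h = None
    for i, f in enumerate(frames):
        h, _ = np.histogram(f, bins=bins, range=(0, 256))
        h = h / h.sum()
        # total-variation distance between consecutive normalized histograms
        if prev_h is not None and 0.5 * np.abs(h - prev_h).sum() > threshold:
            keys.append(i)
        prev_h = h
    return keys

# Synthetic "video": 10 dark frames, then 10 bright frames (one shot cut).
rng = np.random.default_rng(1)
dark = [rng.integers(0, 60, (32, 32)) for _ in range(10)]
bright = [rng.integers(180, 255, (32, 32)) for _ in range(10)]
print(keyframes(dark + bright))  # -> [0, 10]: the cut at frame 10 is detected
```

A learned approach would replace the histogram distance with CNN features, but the pipeline shape (per-frame descriptor, change detection, keyframe selection) stays the same.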

Link to the group or personal webpage:


[1] Karpathy et al. "Large-scale video classification with Convolutional Neural Networks", CVPR 2014. http://www.cs.cmu.edu/~rahuls/pub/cvpr2014-deepvideo-rahuls.pdf
[2] Alfaro et al. "Action Recognition in Video Using Sparse Coding and Relative Features", CVPR 2016.
[3] A dataset for action recognition and segmentation with multiple classes of actors. http://web.eecs.umich.edu/~jjcorso/r/a2d/


Natural Language Processing Techniques for Multilingual  Ontology-driven Text Analysis

Proposer:  Viviana Mascardi
Research area(s): Natural Language Processing, Ontologies, Semantic Web 
Curriculum: Computer Science

An ontology is “a specification of a conceptualization”. It provides a human- and machine-readable formalism for modeling the classification of entities and the relationships between those entities. The purpose of Natural Language Processing (NLP) is the identification of entities in unstructured data and the understanding of the relationships between those entities. By “ontology-driven NLP” we refer to the exploitation of a semantic model to understand what exists in unstructured data.

The research activity we propose is concerned with exploiting natural language processing techniques for ontology-driven text analysis in a multilingual setting. The final goal is to design and implement a modular, scalable and general-purpose infrastructure for text analysis, where the analysis is carried out by following an ontology which defines the relevant words for the domain, and the peculiarities of the supported languages (negation, collocation, irony) are hidden from the user as much as possible. The application domains of such an infrastructure range from sentiment analysis, to disease symptom identification, to query understanding and answering, all in a multilingual setting.
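A toy illustration of ontology-driven, multilingual analysis; all ontology entries, lexicons, and the 3-word negation window below are hypothetical simplifications of what the proposed infrastructure would handle:

```python
# Toy ontology: concept -> {language: surface forms}. All entries hypothetical.
ONTOLOGY = {
    "Fever": {"en": ["fever"], "it": ["febbre"]},
    "Headache": {"en": ["headache"], "it": ["mal di testa"]},
}
NEGATIONS = {"en": ["no", "not", "without"], "it": ["non", "senza"]}

def annotate(text, lang):
    """Return ontology concepts mentioned in `text`, flagging negated ones."""
    words = text.lower()
    found = {}
    for concept, forms in ONTOLOGY.items():
        for form in forms.get(lang, []):
            idx = words.find(form)
            if idx >= 0:
                prefix = words[:idx].split()[-3:]  # naive 3-word negation window
                found[concept] = any(n in prefix for n in NEGATIONS[lang])
    return found

print(annotate("The patient has fever but no headache", "en"))
# {'Fever': False, 'Headache': True}
print(annotate("Il paziente ha la febbre ma non ha mal di testa", "it"))
```

The point of the example is the separation of concerns the project advocates: the ontology and the per-language lexicons are data, so the same analysis code runs unchanged across languages.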

Link to the group or personal webpage:  http://www.disi.unige.it/person/MascardiV/

Maurizio Leotta, Silvio Beux, Viviana Mascardi, Daniela Briola: My MOoD, a Multimedia and Multilingual Ontology Driven MAS: Design and First Experiments in the Sentiment Analysis Domain. ESSEM@AAMAS 2015: 51-66

Angelo Ferrando, Silvio Beux, Viviana Mascardi and Paolo Rosso, Identification of Disease Symptoms in Multilingual Sentences: an Ontology-Driven Approach. MultiLingMine@ECIR 2016


Natural interaction in Augmented Reality environments 

Proposer: Fabio Solari
Research area(s):  Virtual Reality, Computer Vision, Visual Perception
Curriculum: Computer Science

Augmented reality (AR) allows us to build complex and interactive environments, where people can “live” different situations, by enriching reality through virtual layers that people can experience as if they were real. The AR hardware devices considered include both optical see-through devices (e.g., HoloLens, Epson Moverio, Meta Vision) and video see-through ones (e.g., Google Cardboard), since they are characterized by different features and issues to be solved. Users need a stable and coherent perception of the virtual contents in the environment in order to obtain a natural behavior within AR systems. In particular, the idea is to develop methods and techniques suitable to build AR applications that allow people (i) to manipulate real objects enriched by virtual contents that mimic different, fully usable devices, and (ii) to walk in enriched environments, where the users can interact with virtual contents both on the ground and on real objects. In the former situation, occlusions have to be handled in order to obtain a natural manipulation or use of the virtually mimicked device and, in the latter, motion cues have to be considered in order to obtain an ecological interaction.
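One of the issues mentioned above, occlusion handling, can be sketched as a per-pixel depth test between the sensed real scene and the rendered virtual layer. This is a deliberately simplified, hypothetical sketch using dense depth maps (real pipelines must also cope with noisy, incomplete sensed depth):

```python
import numpy as np

def composite(real_rgb, real_depth, virt_rgb, virt_depth):
    """Per-pixel occlusion handling: a virtual pixel is visible only where
    it is closer to the camera than the sensed real surface."""
    visible = virt_depth < real_depth            # boolean occlusion mask
    out = real_rgb.copy()
    out[visible] = virt_rgb[visible]
    return out, visible

# 4x4 toy scene: a real object at depth 1.0 in the left half, background at 5.0.
real_depth = np.full((4, 4), 5.0)
real_depth[:, :2] = 1.0
real_rgb = np.zeros((4, 4, 3), dtype=np.uint8)
# A virtual object at constant depth 2.0, colored red.
virt_depth = np.full((4, 4), 2.0)
virt_rgb = np.zeros((4, 4, 3), dtype=np.uint8)
virt_rgb[..., 0] = 255
out, visible = composite(real_rgb, real_depth, virt_rgb, virt_depth)
print(visible.sum())  # 8: the virtual object is hidden behind the near real one
```

The natural-manipulation goal in the text is exactly about making this mask correct and stable when real hands and objects move in front of virtual content.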

Link to the group or personal webpage:


G Maiello, M Chessa, F Solari, PJ Bex. The (In)Effectiveness of Simulated Blur for Depth Perception in Naturalistic Images. PLoS ONE 10 (10), e0140230, 2015.

A Canessa, M Chessa, A Gibaldi, SP Sabatini, F Solari. Calibrated depth and color cameras for accurate 3D interaction in a stereoscopic augmented reality environment. Journal of Visual Communication and Image Representation 25 (1), 227-237, 2014.

Drawing and placing patterns on 3D surfaces

Proposer: Enrico Puppo
Research area(s): Computer Graphics, Geometric Modeling
Curriculum: Computer Science


This research theme lies at the intersection between geometry processing, digital differential geometry, and interactive techniques.

Stationary stochastic patterns, exhibited by natural surfaces, can be reproduced digitally by noise functions with controlled Fourier spectra. Structural patterns, such as the ones found in handmade decorations, have not been studied in the literature yet. This research aims at providing direct support for drawing and placing patterns on 3D surfaces, represented as polygonal meshes, by lifting standard 2D vector image techniques to a Riemannian 2-manifold endowed with its own metric. This involves the ability to measure lengths and angles on a surface, follow directions in tangent space and, at a higher level, control directional fields on surfaces. The main challenge, with respect to the state of the art, is to provide computationally efficient methods that can support interactive techniques, and to seamlessly combine them in a consistent framework.

The following operations provide concrete examples in the context of this general framework:
1. Given a point on a surface, evaluate its neighborhood for a given radius. This task is related to measuring distances in the underlying metric.
2. Trace a smooth line or strip on the surface. Splines on surfaces require computationally-intensive methods. Piecewise-linear techniques may provide efficient approximate solutions that reduce computation significantly [PPGSC16].
3. Fill a polygon on the surface. Transferring patterns on a disk-like patch bounded by a polygon requires a guidance field on the patch itself, which can be obtained on the basis of boundary and flow constraints [MTPPS15].
4. Boolean operations between polygons on the surface directly extend the previous operations, providing fundamental building blocks for pattern generation.
5. By designing a smooth directional field on the surface, one can define the underlying metric. This functionality can be incorporated on the basis of recent results [PPTS14,PPGSC16].
6. Symmetry management can greatly help in laying out patterns. While extrinsic symmetries are relatively easy to address, incomplete or intrinsic symmetries are challenging. Existing approaches to deal with intrinsic symmetry [PLPZ12] can be integrated within the design of directional fields.
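As a hypothetical sketch of operation 1, the geodesic ball around a vertex can be approximated by running Dijkstra over the mesh edge graph; edge-path distance over-estimates the true intrinsic distance, but it is a common fast approximation for interactive neighborhood queries:

```python
import heapq
import math

def geodesic_ball(vertices, edges, source, radius):
    """Vertices within edge-path distance `radius` of `source` (Dijkstra).

    Edge lengths come from 3D vertex positions, so the result approximates
    a neighborhood in the surface's intrinsic metric."""
    adj = {i: [] for i in range(len(vertices))}
    for a, b in edges:
        w = math.dist(vertices[a], vertices[b])
        adj[a].append((b, w))
        adj[b].append((a, w))
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue  # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, math.inf) and nd <= radius:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# A 1x3 strip of unit squares (2 rows of 4 grid vertices), queried from a corner.
verts = [(x, y, 0.0) for y in (0, 1) for x in range(4)]
edges = ([(i, i + 1) for i in range(3)] + [(i + 4, i + 5) for i in range(3)]
         + [(i, i + 4) for i in range(4)])
ball = geodesic_ball(verts, edges, source=0, radius=1.5)
print(sorted(ball))  # vertices reachable within distance 1.5 of vertex 0
```

Exact or higher-quality geodesics (e.g., via unfolding or the heat method) trade accuracy for cost; the project's interactivity requirement is what makes cheap approximations like this one interesting as building blocks.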

The topic is part of Project D-Surf: Scalable Computational Methods for 3D Printing Surfaces (Italian funding framework PRIN 2015). The main focus of the project is to study and develop new software technologies to support 3D printing, in particular scalable algorithms to design objects with patterns of predictable appearance and mechanical properties. The PhD student will have the opportunity to collaborate with the other partners of the project.

Link to the group or personal webpage: http://www.disi.unige.it/person/PuppoE/

[PPGSC16] "Tracing field-coherent quad layouts" Nico Pietroni, Enrico Puppo, Giorgio Marcias, Roberto Scopigno, Paolo Cignoni, Computer Graphics Forum, 35(7):485-496, 2016 (Pacific Graphics 2016)

[MTPPS15] "Data-driven interactive quadrangulation" Giorgio Marcias, Kenshi Takayama, Nico Pietroni, Daniele Panozzo, Olga Sorkine-Hornung, Enrico Puppo, Paolo Cignoni , ACM Transactions on Graphics, 34 (6) (SIGGRAPH 2015)

[PPTS14] "Frame Fields: Anisotropic and Non-Orthogonal Cross Fields" Daniele Panozzo, Enrico Puppo, Marco Tarini, Olga Sorkine-Hornung, ACM Transactions on Graphics, 33 (4) (SIGGRAPH 2014)

[PLPZ12] "Fields on symmetric surfaces" Daniele Panozzo, Yaron Lipman, Enrico Puppo, Denis Zorin, ACM Transactions on Graphics, 31 (4) (SIGGRAPH 2012) 



Querying heterogeneous and diverse graph data spaces

Proposer(s): Barbara Catania, Giovanna Guerrini
Research area(s): Data Intensive Computing
Curriculum: Computer Science


The wealth of information generated by users interacting with the network and its applications is often under-utilized due to complications in accessing heterogeneous and dynamic data and retrieving relevant information from sources having possibly unknown formats and structures. Processing complex requests on heterogeneous and diverse information sources, often represented as graph data spaces, can thus be costly while still failing to guarantee user satisfaction. Furthermore, dynamic contexts prevent substantial user involvement in the interpretation of the request. The aim of this research theme is to investigate an innovative solution to process the above-mentioned requests, limiting user involvement by exploiting information on: (a) user context (geo-location, interests, needs); (b) data and processing quality; (c) similar requests repeated over time. Preliminary approaches in this direction have been proposed in [1,2,3].
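A deliberately tiny, hypothetical sketch of quality- and context-aware source selection in the spirit of point (a) and (b) above; all sources, topics and scores below are synthetic:

```python
# Hypothetical catalog of graph-data sources with quality and topic metadata.
SOURCES = {
    "src1": {"quality": 0.9, "topics": {"transport"}},
    "src2": {"quality": 0.6, "topics": {"transport", "tourism"}},
    "src3": {"quality": 0.8, "topics": {"weather"}},
}

def select_sources(query_topics, k=2, alpha=0.5):
    """Rank sources by a blend of intrinsic quality and contextual relevance
    (topic overlap with the query), and keep the top k."""
    scored = []
    for name, meta in SOURCES.items():
        overlap = len(meta["topics"] & query_topics) / max(len(query_topics), 1)
        scored.append((alpha * meta["quality"] + (1 - alpha) * overlap, name))
    return [name for _, name in sorted(scored, reverse=True)[:k]]

print(select_sources({"transport"}))  # ['src1', 'src2']
```

The research questions start where this toy stops: how to estimate quality and context automatically, and how to reuse rankings across recurring requests.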

Link to the group or personal webpage:



[1] Barbara Catania, Giovanna Guerrini, Alberto Belussi, Federica Mandreoli, Riccardo Martoglia, Wilma Penzo: Wearable Queries: Adapting Common Retrieval Needs to Data and Users. DBRank@VLDB 2013

[2] Barbara Catania, Giovanna Guerrini, Beyza Yaman: Context-Dependent Quality-Aware Source Selection for Live Queries on Linked Data. EDBT 2016

[3] Barbara Catania, Francesco De Fino, Giovanna Guerrini: Recurring Retrieval Needs in Diverse and Dynamic Dataspaces: Issues and Reference Framework. EDBT/ICDT Workshops 2017


Mixing induction and conduction in inference systems 

Proposers: Davide Ancona, Elena Zucca

Research area(s): Programming languages, Semantics

Curriculum: Computer Science


Inference systems with coaxioms [1] have been recently proposed to express judgments which cannot be defined inductively, since infinite proof trees are needed, but where the standard coinductive interpretation would not correctly capture the intended interpretation.  We will exploit the expressive power of coaxioms to support programming with non-well-founded data structures, such as infinite lists and graphs, in different paradigms, such as logic, object-oriented, functional [2]. On the foundational side, we will face some open problems, e.g., how to restrict the model to definitions of functions, and investigate the relation with other techniques which have a similar aim [3]. 
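The coaxiom formalism itself is proof-theoretic, but its flavor can be conveyed with a small sketch: a predicate over possibly cyclic (hence non-well-founded) lists where revisiting a node closes the proof on itself, as in a greatest-fixed-point reading; a purely inductive recursion would diverge here. The example is illustrative only, not the formal system of [1]:

```python
class Node:
    """A cons cell; cycles make the list non-well-founded (infinite)."""

    def __init__(self, head, tail=None):
        self.head, self.tail = head, tail

def all_pos(node, visiting=None):
    """Coinductive-style check 'every element is positive'.

    Revisiting a node means the (infinite) proof tree closes on itself:
    under a greatest-fixed-point reading the judgment is accepted, whereas
    an inductive recursion would never terminate on a cyclic list."""
    if node is None:
        return True
    visiting = visiting or set()
    if id(node) in visiting:
        return True  # cycle detected: the coinductive hypothesis holds
    if node.head <= 0:
        return False
    return all_pos(node.tail, visiting | {id(node)})

# The infinite list 1, 2, 1, 2, ...
a = Node(1)
b = Node(2, a)
a.tail = b
print(all_pos(a))            # True: accepted coinductively
print(all_pos(Node(-1, a)))  # False: refuted by a finite counterexample
```

Coaxioms generalize exactly this tension: neither the inductive (least) nor the coinductive (greatest) interpretation alone captures all intended judgments on such data.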

Link to the group or personal webpage: 





[1] D. Ancona, F. Dagnino, E. Zucca.  Generalizing inference systems by coaxioms. ESOP’17.

[2]  J. Jeannin, D. Kozen, A. Silva. Language constructs for non-well-founded computation. ESOP’13.

[3]  S. Gay, M. Hole. Well-founded recursion with copatterns. ICFP’13.


 Ubiquitous Computing, Internet of Things, Concurrent and Distributed Systems, Formal Methods

Proposer: Giorgio Delzanno
Area: Computer Science

The Internet of Things (IoT) is the network of physical objects embedded with electronics, software, sensors, and connectivity to enable objects to exchange data with the manufacturer, operator and/or other connected devices. Physical items are no longer disconnected from the virtual world, but can be controlled remotely and can act as physical access points to Internet services.

"Smart" objects play a key role in the Internet of Things vision, since embedded communication and information technology have the potential to revolutionize the utility of these objects. Using sensors, they are able to perceive their context, and via built-in networking capabilities they would be able to communicate with each other, access Internet services and interact with people. The IoT world provides several interesting research challenges, ranging from the integration of different platforms (for instance mobile and cloud environments), to the analysis and validation of the huge amount of content generated by these applications via big data analysis and processing techniques, collaborative protocol design, latency reduction/hiding techniques for guaranteeing real-time constraints, large-scale processing of user information, privacy and security issues, and state consistency/persistence. The goal of the research is to consider both theoretical aspects, e.g., the application of formal methods for the validation of IoT protocols, as well as practical aspects related to new programming methodologies and platforms for the development and orchestration of IoT applications.


An Abstract Machine for Event-loop Based Asynchronous Programs
D. Ancona, G. Delzanno, L. Franceschini, M. Leotta, E. Prampolini, M. Ribaudo, F. Ricca
Technical Report, April 2017

Testing Internet of Things Systems 

Proposers: Filippo Ricca, Paolo Tonella
Research area(s): Software Engineering, Software Testing, IoT applications
Curriculum: Computer Science

Internet of Things (IoT) is a network of interconnected physical objects and devices
sharing data through secure infrastructures, possibly transmitting them to a central control
server in the cloud. For example, thanks to IoT, trains are able to dynamically compute and report arrival
times to waiting passengers, cars are able to avoid traffic jams by proposing alternative paths, and
m-health systems are able to determine the right medication dose for a patient.
As the IoT technology continues to mature, we will see more and more
IoT applications and systems emerge in different contexts.

Ensuring that IoT applications are secure, reliable, and compliant is of paramount importance since
IoT systems are often safety-critical. At the same time, testing these kinds of systems can be difficult
due to the wide set of disparate technologies used to build IoT systems (hardware and software) and the
added complexity that comes with Big Data (the three "V"s: huge volume, great velocity and big variety).
However, IoT software testing has been mostly overlooked so far, both by research and
industry. This is apparent from the related scientific literature, where proposals
and approaches in this context are rare.

The aim of this research theme is: 1) to investigate novel approaches and techniques for testing IoT
systems; 2) to build tools supporting the devised approaches; and, 3) to validate them experimentally.
As a first step, existing End2End testing tools for Web and mobile application testing
will be taken into consideration.

Link to the personal webpages:

M. Leotta, F. Ricca, D. Clerissi, D. Ancona, G. Delzanno, M. Ribaudo, L. Franceschini
Towards an Acceptance Testing Approach for Internet of Things Systems. EnWot 2017 (under review)


A Holistic Method for Business Process Analytics
Proposer: Gianna Reggio, Filippo Ricca
Research area(s): Software Engineering, Business Intelligence, Business Process Management and Modelling

Curriculum: Computer Science


In the last decade, the availability of massive storage systems, large amounts of data and the advances in several disciplines related to data science provided powerful tools for potentially improving the business activities of the organizations.
Indeed, management can leverage the large amounts of data for extracting, by means of different techniques, useful information with the aim of improving the business processes and related activities. Business Process Analytics (BPA) refers to collecting and analysing process-related data to answer process-centric questions (see, e.g., [3] and [2]). [1] hints at the assumed canonical steps to be followed for a data analytics task: “a) developing questions to be answered, b) curating the potential data sources, c) collecting data from these sources, d) cleaning the collected data, e) storing it, f) processing/analysing the data, and then g) displaying and visualizing the data in response to queries.”
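A toy rendition of steps (c)-(f) on a synthetic event log: collect raw events, clean incomplete records, and answer one process-centric question (mean activity duration). Log schema and values are invented for illustration:

```python
from collections import defaultdict

# Step (c): collected raw events, as (case id, activity, start, end) tuples.
raw_events = [
    ("c1", "review", 0, 4), ("c1", "approve", 4, 5),
    ("c2", "review", 1, 7), ("c2", "approve", None, None),  # incomplete record
]

def mean_durations(events):
    """Step (d): drop incomplete rows; step (f): aggregate per activity."""
    clean = [e for e in events if e[2] is not None and e[3] is not None]
    totals = defaultdict(list)
    for _, act, start, end in clean:
        totals[act].append(end - start)
    return {act: sum(d) / len(d) for act, d in totals.items()}

print(mean_durations(raw_events))  # {'review': 5.0, 'approve': 1.0}
```

The holistic method proposed here targets what the toy leaves implicit: which questions are worth asking (step a) and how the stored data should be organized so such queries are even answerable.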

Most often, organizations start introducing BPA from the bottom up, i.e. by trying to do some analysis on the data they already have at hand, found inside the many different systems and storage means available. However, there has been a lot of work on organizing process data in a way suitable to answer the relevant questions (see Ch. 5 in [2] for references).

The aim of this project is to develop a holistic method combining business process modelling and data-driven business process improvement, helping to:
- connect the business processes and stakeholders' goals with the stored data;
- elicit the right questions for improving the business activities, and subsequently select the right analytic technique for answering them;
- optimize data collection and storage with respect to the intended analyses.

Some initial ideas can be found in [4].

Link to the group or personal webpage: http://sepl.dibris.unige.it/index.php


[1] K. M. Anderson. Embrace the challenges: Software engineering in a big data world. In Proceedings of 1st IEEE/ACM International Workshop on Big Data Software Engineering, BIGDSE 2015, pages 19–25. IEEE, 2015.

[2] S.Beheshti,B.Benatallah,S.Sakr,D.Grigori,H.Motahari-Nezhad,M.Barukh,A.Gater, and S. Ryu. Process Analytics: Concepts and Techniques for Querying and Analyzing Process Data. Springer, 2016.

[3] M. zur Mühlen and R. Shapiro. Business Process Analytics, pages 137–157. Springer, 2010.

[4] Gianna Reggio, Maurizio Leotta, Filippo Ricca, Egidio Astesiano. Towards a Holistic Method for Business Process Analytics. In Proceeding of the Monterey Workshop 2016, Bejing October 2016. To appear in LNCS, Springer Verlag. 2017


Engineering complex applications with agent-oriented approaches

Proposers: Davide Ancona and Viviana Mascardi
Research area(s): Intelligent Agents and Multiagent Systems
Curriculum: Computer Science


Software engineers continually strive to develop tools and techniques to manage the inherent complexity of software systems. When systems involve not only software components, but also hardware devices (the Internet of Things) or physical processes (cyberphysical systems), their complexity soon becomes difficult to manage.

Many authors [WC01] argue that engineering methods and techniques developed in the intelligent software agents and multiagent systems research area offer the right tools to address such complexity. The aim of this research theme is:

1) to investigate novel approaches and techniques for engineering multiagent systems with a particular focus on their runtime verification using computational logic, along the lines of the previous and current research of the proposers [ABFM15a,ABFM15b,FAM17];

2) to build/extend tools supporting the devised approaches; and,

3) to design and develop real applications in some challenging domain, and to apply the devised techniques and tools to the runtime verification of the developed system.
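As an illustrative sketch of point 1, a runtime monitor can replay an agent interaction trace against a protocol given as a finite automaton. This is a drastic simplification of the global protocols of the proposers' work; the protocol below is invented:

```python
# Protocol as an automaton: state -> {(sender, receiver, performative): next}.
PROTOCOL = {
    "start":     {("buyer", "seller", "request"): "requested"},
    "requested": {("seller", "buyer", "offer"): "offered"},
    "offered":   {("buyer", "seller", "accept"): "done",
                  ("buyer", "seller", "reject"): "start"},
}

def monitor(trace, state="start"):
    """Replay a message trace; report the first protocol violation, if any."""
    for i, msg in enumerate(trace):
        nxt = PROTOCOL.get(state, {}).get(msg)
        if nxt is None:
            return f"violation at step {i}: {msg} not allowed in state {state}"
        state = nxt
    return "ok"

good = [("buyer", "seller", "request"), ("seller", "buyer", "offer"),
        ("buyer", "seller", "accept")]
bad = [("buyer", "seller", "request"), ("buyer", "seller", "accept")]
print(monitor(good))  # ok
print(monitor(bad))   # violation at step 1: the accept arrives too early
```

Decentralizing such monitors across agents, as in [FAM17], and expressing protocols in richer computational-logic formalisms is where the actual research lies.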

Link to the group or personal webpage:



[WC01] Michael Wooldridge and Paolo Ciancarini. 2001. Agent-oriented software engineering: the state of the art. In First international workshop, AOSE 2000 on Agent-oriented software engineering, Michael J. Wooldridge and Paolo Ciancarini (Eds.). Springer-Verlag New York, Inc., Secaucus, NJ, USA, 1-28.

[FAM17] Angelo Ferrando, Davide Ancona and Viviana Mascardi. 2017. Decentralizing MAS Monitoring with DecAMon. In Proceedings of the 16th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2017. To appear.

[ABFM15a] Davide Ancona, Daniela Briola, Angelo Ferrando and Viviana Mascardi. 2015. Runtime verification of fail-uncontrolled and ambient intelligence systems: A uniform approach. Intelligenza Artificiale, vol. 9, no. 2, pp. 131-148.

[ABFM15b] Davide Ancona, Daniela Briola, Angelo Ferrando, and Viviana Mascardi. 2015. Global Protocols as First Class Entities for Self-Adaptive Agents. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems (AAMAS '15). International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 1019-1029.

Computer Science Education: Theory, Tools and Applications

Promoters: Giorgio Delzanno and Giovanna Guerrini
Field: Computer Science

Computer Science Education is an exciting field that combines both conceptual and technology-driven aspects.
The Hour of Code, a worldwide movement for promoting computer science, reached tens of millions of students in more than 180 countries. The use of visual languages for teaching programming concepts is one of the key points of the success of this initiative. In this context we are interested in the following research lines:
- Design of Computer Science Introductory Courses for Primary and Secondary School Teachers
- Design of new installations for programming activities at the Città dei Bambini in Genova
- Design of new languages and applications for teaching advanced concepts in computer science based on modern languages and technology (languages and tools for mobile applications, the Internet of Things, big data, etc.)
This research is aimed at building the skeleton of a new Master's programme specifically dedicated to Computer Science Education.
Furthermore, we are interested in innovative applications of existing tools used in Computer Science Education in other fields such as rehabilitation, entertainment, etc.

In collaboration with Città dei Bambini e dei Ragazzi, Istituto Gaslini, Primary and Secondary Schools in Genova

D. Ancona, A. Barla, B. Catania, G. Delzanno, G. Guerrini, F. Odone, V. Mascardi, M. Ribaudo.
L'Ora del Codice è arrivata a Genova!  Didamatica 2015

Computer Engineering


Automated formalization and analysis of high-level requirements  for safety-critical systems
Proposers: Massimo Narizzano,  Armando Tacchella
Research area(s): Artificial Intelligence, Software Engineering, Computer-Aided Verification and Reasoning
Curriculum: Computer Science

According to common industrial practice, requirements are specified in natural language and checked for errors manually, e.g., by peer reviews. The shortcomings of this tool chain are well known: the disambiguation of the (natural language) requirements is done by component specialists (instead of system specialists) during implementation and testing; the cost and the error detection rate of the manual checks do not scale well with the number of requirements, since they affect one another and cannot be analyzed in isolation. Further, a review can detect errors but never guarantee their absence. A tool chain for the formalization of requirements and the (subsequently possible) formal, automatic analysis of requirements opens the perspective of eliminating the above shortcomings. Much research has been invested recently in language and tool support for both formalization and analysis. The question whether such a tool chain is feasible in practice cannot be decided by a principled argument that applies uniformly to all practical settings; we need a number of feasibility studies which address the question on a case-by-case basis. The research aims to design, implement and evaluate a tool chain for the automated formalization and analysis of high-level requirements written in natural language. The goal of the research is to create new methodologies, algorithms and tools to ease the requirement analysis phase and provide a basis for sound contractual specifications. The intended domain of application is that of safety-critical systems, wherein a substantial investment in formal-based automated verification and reasoning is more likely to pay dividends in practice.
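One step of such a tool chain, translating restricted-English requirement patterns into temporal-logic templates, can be sketched as follows. The patterns and LTL-style templates below are hypothetical examples, not a real controlled grammar:

```python
import re

# Restricted-English patterns mapped to LTL-style templates (illustrative only).
PATTERNS = [
    (re.compile(r"^if (?P<p>.+), then (?P<q>.+) shall hold$"),
     "G({p} -> {q})"),
    (re.compile(r"^(?P<p>.+) shall never hold$"),
     "G(!{p})"),
]

def formalize(req):
    """Translate one requirement sentence, or return None when the sentence
    falls outside the restricted grammar and needs manual disambiguation."""
    s = req.strip().rstrip(".").lower()
    for rx, template in PATTERNS:
        m = rx.match(s)
        if m:
            return template.format(**m.groupdict())
    return None

print(formalize("If the door is open, then motor_off shall hold."))
# G(the door is open -> motor_off)
print(formalize("overspeed shall never hold."))  # G(!overspeed)
```

Once requirements are formulas, consistency and completeness checks become automatable, which is exactly the analysis side of the proposed tool chain.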

Link to the group or personal webpage: www.aimslab.org/tacchella

Ali Khalili, Massimo Narizzano, Armando Tacchella, Enrico Giunchiglia: Automatic Test-Pattern Generation for Grey-Box Programs. AST@ICSE 2015: 33-37.
A. Post, I. Menzel, and A. Podelski. Applying restricted English grammar on automotive requirements: does it work? A case study. In REFSQ, pages 166-180, 2011.
M. P. E. Heimdahl and N. G. Leveson. Completeness and consistency analysis of state-based requirements. IEEE Trans. on Software Engineering, pages 3-14, 1995.
C. L. Heitmeyer, R. D. Jeffords, and B. G. Labaw. Automated consistency checking of requirements specifications. ACM Trans. Softw. Eng. and Meth., 5(3):231-261, 1996.
L. Yu, S. Su, S. Luo, and Y. Su. Completeness and consistency analysis on requirements of distributed event-driven systems. In TASE, pages 241-244, Washington, 2008.
A. Post, J. Hoenicke, and A. Podelski. rt-inconsistency: a new property for real-time requirements. In FASE, pages 34-49, 2011.
J. Skakkebæk. Liveness and fairness in duration calculus. In B. Jonsson and J. Parrow, editors, CONCUR '94, volume 836, pages 283-298. Springer, 1994.

Computer automated design of elevator systems

Proposer: Armando Tacchella
Research area(s): Artificial Intelligence, Software Engineering
Curriculum: Computer Science
Computer-automated design (CautoD) differs from “classical” computer-aided design (CAD) in that it is oriented to replace some of the designer's capabilities, and not just to support a traditional work-flow with computer graphics and storage capabilities. While CautoD programs may integrate CAD functionalities, their purpose goes far beyond the replacement of traditional drawing instruments and most often involves the use of advanced techniques from artificial intelligence. As mentioned in [BOP+16], the first scientific report of CautoD techniques is the paper by Kamentsky and Liu [KL63], who created a computer program for designing character-recognition logic circuits satisfying given hardware constraints. In mechanical design (see, e.g., [RS12]) the term usually refers to tools and techniques that mitigate the effort in exploring alternative solutions for structural implements, and this is the flavor of CautoD that will be considered hereafter.

Elevators are complex implements whose design requires the combination of several standard components which must be fitted to custom spatial and usage requirements. Since human designers cannot simulate all possible viable models, they leverage “good design practices”, i.e., heuristics, that usually yield reasonable engineering solutions. Conversely, while a program might thoroughly simulate the space of alternative elevator designs, and attempt to reach a set of satisficing designs using global optimization techniques, the process is not guaranteed to be computationally feasible. The goal of this research is to provide theoretical, methodological and experimental evidence to evaluate various potential approaches to CautoD of elevator systems, serving as guidance for developing the prototype of a software system which can support the various technical personnel involved in the design and realization of elevators.
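A minimal, hypothetical rendition of the design-space exploration idea: enumerate combinations from small component catalogs, filter by spatial and load constraints, and return a satisficing (here simply the cheapest feasible) design. Catalogs and numbers are invented:

```python
from itertools import product

# Hypothetical component catalogs.
CARS = [("car-S", 480, 1100, 3000),   # (name, capacity_kg, width_mm, cost)
        ("car-M", 630, 1350, 3600)]
MOTORS = [("mot-A", 500, 900),        # (name, max_load_kg, cost)
          ("mot-B", 700, 1400)]

def design(shaft_width_mm, required_kg):
    """Enumerate car x motor combinations, keep those satisfying the spatial
    and load constraints, and return the cheapest feasible design (or None)."""
    feasible = []
    for (car, cap, width, car_cost), (motor, max_kg, mot_cost) in \
            product(CARS, MOTORS):
        if width <= shaft_width_mm and cap >= required_kg and max_kg >= cap:
            feasible.append((car_cost + mot_cost, car, motor))
    return min(feasible) if feasible else None

print(design(shaft_width_mm=1200, required_kg=450))
# (3900, 'car-S', 'mot-A')
```

With realistic catalogs the product space explodes, which is why the research contrasts exhaustive, computation-intensive exploration with heuristic "good design practice" methods, as in [AMT17].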

Link to the group or personal webpage:

[AMT17] Leopoldo Annunziata, Marco Menapace, Armando Tacchella: Computer Intensive vs. Heuristic Methods in Automated Design of Elevator Systems. In 31st European Conference on Modelling and Simulation, ECMS 2017 (to appear).

[BOP+16]Robin T. Bye, Ottar L. Osen, Birger Skogeng Pedersen, Ibrahim A. Hameed, and Hans
Georg Schaathun. A software framework for intelligent computer-automated product design. In 30th
European Conference on Modelling and Simulation, ECMS 2016, Regensburg, Germany, May 31 –
June 3, 2016, Proceedings., pages 534–543, 2016.

[KL63] Louis A. Kamentsky and Chao-Ning Liu. Computer-automated design of multifont print recognition logic. IBM Journal of Research and Development, 7(1):2–13, 1963.

[RS12] R. Venkata Rao and Vimal J. Savsani. Mechanical design optimization using advanced
optimization techniques. Springer Science & Business Media, 2012.

Innovative solutions for AI planning and scheduling

Proposer:  Marco Maratea
Research area(s):  Artificial Intelligence, Planning, Scheduling
Curriculum: Computer Science, Systems Engineering

A longstanding goal of Artificial Intelligence is to build models that represent the physical world as precisely as possible; however, this comes with an obvious trade-off between model accuracy and the complexity of the algorithms and tools that must solve the problem described by the model. In this context, classical languages and algorithms in AI, e.g. in planning, have recently been extended to reason on mixed discrete-continuous dynamics, which brings significant challenges to these areas.
The first goal of this project is to design, implement and experiment with innovative off-line algorithms based on symbolic AI techniques.
The second goal is to apply the related languages and resulting tools to modeling and solving real-life applications; possible applications include urban traffic control and scheduling in the biomedical area, e.g. nurse scheduling.
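As a minimal illustration of classical (discrete, off-line) planning, the sketch below runs breadth-first forward search over STRIPS-style actions on an invented toy logistics domain. It deliberately ignores the mixed discrete-continuous dynamics that PDDL+ addresses; all predicate and action names are illustrative.

```python
from collections import deque

# A minimal forward state-space planner over STRIPS-style actions, each given
# as (name, preconditions, add-list, delete-list) with frozensets of facts.
def plan(init, goal, actions):
    init, goal = frozenset(init), frozenset(goal)
    frontier = deque([(init, [])])
    seen = {init}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:                       # goal facts all hold
            return path
        for name, pre, add, delete in actions:
            if pre <= state:                    # action applicable
                nxt = (state - delete) | add
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None                                 # goal unreachable

# Toy logistics domain: move a package from A to B by truck.
actions = [
    ("load",   frozenset({"pkg@A", "truck@A"}),     frozenset({"pkg@truck"}), frozenset({"pkg@A"})),
    ("drive",  frozenset({"truck@A"}),              frozenset({"truck@B"}),   frozenset({"truck@A"})),
    ("unload", frozenset({"pkg@truck", "truck@B"}), frozenset({"pkg@B"}),     frozenset({"pkg@truck"})),
]
print(plan({"pkg@A", "truck@A"}, {"pkg@B"}, actions))   # ['load', 'drive', 'unload']
```

Real planners replace the blind BFS with heuristic search, and PDDL+ adds processes and events over continuous variables on top of this discrete core.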

Link to the group or personal webpage: http://www.star.dist.unige.it/~marco/

Efficient Macroscopic Urban Traffic Models for Reducing Congestion: A PDDL+ Planning Approach.
Mauro Vallati, Daniele Magazzeni, Bart De Schutter, Lukas Chrpa and Thomas Leo McCluskey.
Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16).

M. Maratea, L. Pulina. Solving Disjunctive Temporal Problems with Preferences using Maximum Satisfiability.
AI Communications. Vol. 25(2), pages 137-156, 2012.

Innovative Methodologies for User Authentication on Mobile Devices
Proposer(s): Alessio Merlo (DIBRIS), Mauro Migliardi (DEI - UNIPD)

Curriculum: Computer Science

Areas: Computer Security, Mobile Security, Operating Systems

Description: Nowadays, data security is one of the most important aspects, if not the most important, of mobile applications, web applications, and information systems in general. On one hand, this is a result of the vital role of mobile and web applications in our daily life. On the other hand, the rapid evolution of computers and software has led to ever more sophisticated threats and attacks that jeopardize users' credentials and privacy. Today's computers are capable of automatically performing authentication attempts by replaying recorded data. This has brought the challenge of authentication and access control to a whole new level, and has urged researchers to develop new mechanisms to prevent software from performing automatic authentication attempts. In this research we propose to explore innovative methodologies for user authentication capable of both leveraging and securing the typical multi-device setup (mobile, wearable and traditional systems) that users confront daily.

Link to the group or personal webpage:



 Internet of Things Security

Proposers: Alessandro Armando (DIBRIS), Enrico Cambiaso and Maurizio Aiello (CNR)

The Internet of Things (IoT) is an emerging technology in the industrial setting. It provides sensors with the ability to collect, elaborate and transmit data so that they can communicate with one another and/or with human beings, in order to monitor and control the surrounding environment. Thanks to rapid developments in the underlying technologies, IoT is paving the way to a large number of new applications that promise to improve the quality of our lives. Most IoT communication originates from computing devices and is used in machine-to-machine (M2M) exchanges. Devices are often equipped with limited hardware and offer only "low level" security mechanisms. The information exchanged by IoT devices may be sensitive and private, e.g. information about the presence of people inside a house, the opening/closing of doors, or security sensors. Security solutions must protect assets and privacy from malicious activities. The goal of the research is to investigate IoT security aspects, including novel attacks, network and device protection, and the security of communication protocols, in order to ensure the reliability and privacy of IoT-enabled systems and infrastructures.



Real-time Interactive Software Platforms for Embodied Multisensory Learning of Musical Instruments.

Proposer: Gualtiero Volpe

Research area(s): Human Computer Interaction
Curriculum: Computer Science

This proposal is in the framework of the Horizon 2020 European Project TELMI. The main aim of TELMI is to study how we learn musical instruments, taking the violin as a case study, from a pedagogical and scientific perspective, and to create new interactive, assistive, self-learning, augmented-feedback, and socially-aware systems complementary to traditional teaching. The research will focus on a software platform for synchronized multimodal recordings, starting from the state of the art at Casa Paganini – InfoMus (EyesWeb); on the study and development of algorithms and techniques for the automated measurement of expressive gestures performed by both student and teacher; and on the automated measurement of social signals, such as entrainment and leadership, among students themselves in specific learning sessions, or between student and teacher.

Link to the group or personal webpage: www.casapaganini.org


A.Camurri, G.Volpe (2016) The Intersection of art and technology, IEEE Multimedia, Vol.23, No.1, pp.10-17, IEEE CS Press.

G.Varni, G.Volpe, A.Camurri (2010) A System for Real-Time Multimodal Analysis of Nonverbal Affective Social Interaction in User-Centric Media. IEEE Transactions on Multimedia, Vol.12, No.6, pp.576-590.

G.Castellano, A.Camurri, M.Mortillaro, K.Scherer, G.Volpe (2008) Expressive Gesture and Music: Analysis of Emotional Behaviour in Music Performance, Music Perception, Vol.25, No.6, pp.103-119, University of California Press.


Real-time Interactive Software Platforms for Embodied Multisensory Learning of Dance Expressive Qualities

Proposer: Antonio Camurri

Research area(s): Human Computer Interaction
Curriculum: Computer Science

This proposal is in the framework of the Horizon 2020 European 3-year Project Wholodance (2016-2018): Wholodance aims at both researching and innovating contemporary learning theories of embodied cognition and dance education, building on advances in neuroscience, pedagogical and learning theories, and educational psychology, together with new technologies in artificial intelligence and human-computer interaction.
The research will focus on the following topics:
- Participation to the design and implementation of scenarios and scientific experiments, collaborating with the teams of project partners;
- Design and implementation of a repository of multimodal recordings of dance, to be used for machine learning techniques of automated analysis of dance, and for evaluation and validation of research results;
- Study and development of a software platform for the real-time analysis of dance movements, mainly focusing on expressive qualities and emotion (including entrainment and leadership, e.g. between teacher and student) in dance.

Link to the group or personal webpage: www.casapaganini.org

References (optional):

A.Camurri, G.Volpe (2016) The Intersection of art and technology, IEEE Multimedia, Vol.23, No.1, pp.10-17, IEEE CS Press.

A.Camurri, I.Lagerlof, G.Volpe (2003). Recognizing Emotion from Dance Movement: Comparison of Spectator Recognition and Automated Techniques. International Journal of Human Computer Studies, Vol.59, No.1-2, pp.213-225, Elsevier.

D.Glowinski, N.Dael, A.Camurri, G.Volpe, M.Mortillaro, K.Scherer (2011) Towards a Minimal Representation of Affective Gestures. IEEE Transactions on Affective Computing, March 2011, Vol.2, No.2, pp.106-118.

 Secure and Reliable Systems (Funded by FBK)


Data Protection and Privacy: process, technical, and regulatory issues (Funded by FBK)

Proposer(s): Silvio Ranise
Research area(s): Cyber Security
Curriculum: Secure and Reliable Systems

Description: In today's interconnected world, securing on-line services is a continuously evolving endeavor: the threat landscape changes in real time, rendering security policies, mechanisms, and tools inadequate shortly after their deployment. This fast-moving situation requires that organizations be constantly vigilant to ensure that their security posture remains strong by keeping their controls up to date. To add complexity, legal requirements protecting users' personal information must also be considered, suitably instantiated, and enforced to comply with existing laws or regulations (such as the GDPR), which themselves change over time. Ideally, security and compliance should be integrated to find the best possible balance between protecting data appropriately (including data subject to regulations) and guaranteeing the privacy of users. For this to become possible, it is crucial to develop methodologies and techniques that support the specification and automated analysis of regulations for data protection and privacy together with security solutions in a coherent and uniform way. This will be the main goal of the thesis, which will be carried out in the context of a joint Cyber Security project between FBK and Trentino Network (representing the Trentino public entities) whose main goal is to increase the Cyber Security readiness of the local Public Administration.

Advisor: Silvio Ranise (head of the S&T research unit of FBK) received his PhD in Computer Engineering from the University of Genova (Italy) and the University Henri Poincaré (Nancy, France) in 2002, in a joint Italy-France PhD program. His research focuses on access control policies, legal compliance checking, and the design of authentication and authorization solutions.

The Research Environment: The student will work both in the Security & Trust (S&T) research unit of FBK and Trentino Network (TNet) in Trento, Italy. S&T (http://st.fbk.eu/) conducts research in Cyber Security, focusing on Identity and Access Management, Compliance, Cloud and Mobile security. TNet (http://www.trentinonetwork.it/) deployed a broadband (optical and wireless) network throughout Trentino and is the provider of networking services to the various local Public Administrations and telecommunication operators in the Trentino area.

Link to personal home page: https://st.fbk.eu/SilvioRanise


[1] P. Guarda, S. Ranise, H. Siswantoro. Security Analysis and Legal Compliance Checking for the Design of Privacy-friendly Information Systems. Technical report, 2017.

[2] G. Danezis, J. Domingo-Ferrer, M. Hansen, J.-H. Hoepman, D. L. Metayer, R. Tirtea, and S. Schiffner. Privacy and data protection by design—from policy to engineering. Report ENISA, 2014.

Security Testing Procedures for Mobile Applications (Funded by FBK)

Proposer(s): Roberto Carbone

Research area(s): Computer Security

Curriculum: Secure and Reliable Systems


Several vulnerabilities in mobile applications have been reported in the last few years [1,2,3]. The reasons are manifold. For instance, for identity management, the trend is to adapt solutions originally designed to work in a traditional web scenario to the mobile context, without taking into account the peculiarities of mobile platforms [4].
In this context, it is extremely important to support app developers, allowing them to test their applications in order to spot security issues. To this purpose, the research concerns the definition of a security testing procedure for mobile applications. This activity includes (i) the analysis of existing security testing techniques for mobile applications, (ii) the specification of a (semi-)automatic approach for security testing of mobile applications, and (iii) the implementation of a tool that automatically executes test cases for mobile apps.

Advisor: Dr. Roberto Carbone has been a researcher in the Security & Trust Research Unit of the Bruno Kessler Foundation (FBK-ICT) in Trento since November 2010. He obtained the MSc degree in Computer Engineering at the University of Genova in 2005 and received his PhD from the same University in 2009. His PhD thesis, titled "LTL Model-Checking for Security Protocols", was awarded the CLUSIT prize 2010 by the Italian Association for Information Security. His research mainly focuses on the formal analysis of security protocols and services, and on identity management solutions.

The Research Environment: The student will be situated at FBK in Trento, Italy, within the Security & Trust (ST) research unit. FBK-ST (https://st.fbk.eu) develops cutting-edge security solutions for web-based authentication and authorisation, mobile, and cloud-based and service-oriented applications and infrastructures. The team consists of approximately 12 researchers, spread among senior members and PhD students.

Link to personal home page: https://st.fbk.eu/RobertoCarbone


[1] H. Wang, Y. Zhang, J. Li, H. Liu, W. Yang, B. Li, D. Gu. Vulnerability Assessment of OAuth Implementations in Android Applications. In Proceedings of the 31st Annual Computer Security Applications Conference (ACSAC 2015). ACM, New York, NY, USA, 61-70. DOI: https://doi.org/10.1145/2818000.2818024

[2] Y. Chen, T. Li, X. Wang, K. Chen, X. Han. Perplexed Messengers from the Cloud: Automated Security Analysis of Push-Messaging Integrations. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security (CCS '15). ACM, New York, NY, USA, 1260-1272. DOI: https://doi.org/10.1145/2810103.2813652

[3] NIST. Mobile Threat Catalogue. URL: https://pages.nist.gov/mobile-threat-catalogue/

[4] G. Sciarretta, A. Armando, R. Carbone, S. Ranise. Security of Mobile Single Sign-On: A Rational Reconstruction of Facebook Login Solution. In Proceedings of the 13th International Joint Conference on e-Business and Telecommunications - Volume 4: SECRYPT, 147-158, 2016, Lisbon, Portugal.


Security Testing (Funded by FBK)
Proposer(s): Mariano Ceccato
Research area(s): Software Engineering
Curriculum: Secure and Reliable Systems


When engineering secure software systems and services, software testing is one of the main practices to detect faults as well as security vulnerabilities. Security testing (also called penetration testing) is a branch of software testing devoted to stressing programs with respect to their security features, with the aim of identifying vulnerabilities. The aspects of security testing that will be considered for investigation during the PhD include: generating input values (referred to as test payloads) intended to exercise vulnerabilities; and evaluating whether such payloads manage to expose an actual vulnerability, i.e., the security oracle.
Security testing is highly expensive, given the complexity of modern systems, which typically provide a wide range of services, and the sophistication of attacks and exploits. To reduce effort and cost, the focus will be on achieving a high level of automation in security testing.
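The payload/oracle split can be made concrete with a deliberately tiny sketch: a vulnerable "query builder" is exercised with candidate payloads, and a simple oracle heuristic flags payloads that escape the intended string context. Everything here (the function, payload list, and oracle rule) is invented for illustration; real oracles are far subtler.

```python
# Toy illustration of the payload/oracle split in security testing.
def build_query(username):          # vulnerable: naive string concatenation
    return "SELECT * FROM users WHERE name = '" + username + "'"

# Candidate test payloads: benign inputs plus classic injection strings.
PAYLOADS = ["alice", "bob", "' OR '1'='1", "'; DROP TABLE users; --"]

def oracle(query):
    # Oracle heuristic: a well-formed query should contain exactly one
    # quoted literal, i.e. exactly two single quotes. Anything else
    # suggests the payload broke out of the string context.
    return query.count("'") != 2    # True => suspected vulnerability

findings = [p for p in PAYLOADS if oracle(build_query(p))]
print(findings)
```

Running this flags the two injection payloads while passing the benign inputs, which is exactly the generate-then-judge loop an automated security testing tool must scale up.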

Advisor: Mariano Ceccato is a tenured researcher at FBK (Fondazione Bruno Kessler) in Trento, Italy. He received the master's degree in Software Engineering from the University of Padova, Italy, in 2003 and the PhD in Computer Science from the University of Trento in 2006 under the supervision of Paolo Tonella, with the thesis "Migrating Object Oriented code to Aspect Oriented Programming". His research interests include security testing, migration of legacy systems, aspect-oriented programming and empirical studies. He was program co-chair of the 12th IEEE Working Conference on Source Code Analysis and Manipulation (SCAM 2012), held in Riva del Garda, Italy.
The Research Environment: The student will be situated at FBK in Trento, Italy, within the Software Engineering (SE) research unit. FBK-SE (https://se.fbk.eu) carries out research in requirements engineering, code analysis and testing. The team consists of approximately 15 researchers, spread among senior members, postdocs and PhD students.

Link to personal home page:

Model-based safety assessment for hybrid systems (Funded by FBK)

Proposer(s): Marco Bozzano, Alessandro Cimatti, Stefano Tonetta

Research area(s):
Embedded Systems, Model-Based Safety Assessment, Formal Verification, Model Checking

Curriculum: Secure and Reliable Systems

Model-based safety assessment (MBSA) is a growing research area in the design of complex critical systems. Automated tools are used to analyze system correctness and reliability, and to support certification by means of the automated construction of artifacts such as Fault Trees and FMEA tables. The objective of the study is to lift MBSA techniques from finite-state systems to the case of hybrid systems, which include continuous time and complex dynamics. The studies will follow three related directions: model extension, i.e. the generation of models encompassing faulty behaviors from nominal models; engines for verification and parameter synthesis based on SMT and IC3; and contract-based analysis exploiting the system architecture. The studies will be carried out as part of COMPASS, a framework funded by the European Space Agency for the design of complex space systems.
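As a toy illustration of one of the artifacts mentioned above, the sketch below computes the minimal cut sets of a small AND/OR fault tree, the combinatorial core behind Fault Tree generation. The tree, its event names, and the nested-tuple encoding are invented for illustration and are unrelated to the COMPASS toolset.

```python
from itertools import product

# Compute the cut sets of an AND/OR fault tree given as nested tuples,
# e.g. ("or", "e1", ("and", "e2", "e3")); leaves are basic-event names.
def cut_sets(node):
    if isinstance(node, str):                      # basic event
        return [frozenset({node})]
    op, *children = node
    child_sets = [cut_sets(c) for c in children]
    if op == "or":                                 # union of children's cut sets
        return [cs for sets in child_sets for cs in sets]
    if op == "and":                                # cross-product combination
        return [frozenset().union(*combo) for combo in product(*child_sets)]
    raise ValueError(op)

def minimal(sets):
    # Keep only cut sets with no strict subset among the others.
    return sorted({s for s in sets if not any(t < s for t in sets)}, key=sorted)

tree = ("or", "pump_fail", ("and", "valve_stuck", ("or", "pump_fail", "sensor_fail")))
print(minimal(cut_sets(tree)))
```

The non-minimal set {pump_fail, valve_stuck} is pruned because {pump_fail} alone already causes the top event; lifting such analyses to hybrid dynamics is precisely where this enumeration stops being enough and SMT-based engines come in.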

Link to the group or personal webpage:





[1] M. Bozzano, A. Cimatti, A.F. Pires, D. Jones, G. Kimberly, T. Petri, R. Robinson and S. Tonetta. Formal Design and Safety Analysis of AIR6110 Wheel Brake System. In Proceedings of CAV 2015.

[2] M. Bozzano, A. Cimatti, Alberto Griggio and Cristian Mattarei. Efficient Anytime Techniques for Model-Based Safety Analysis. In Proceedings of CAV 2015.

[3] Alessandro Cimatti, Alberto Griggio, Sergio Mover, Stefano Tonetta: HyComp: An SMT-Based Model Checker for Hybrid Systems. In Proceedings of TACAS 2015.

[4] M. Bozzano, A. Cimatti, J.-P. Katoen, P. Katsaros, K. Mokos, V.Y. Nguyen , T. Noll, B. Postma and M. Roveri. Spacecraft Early Design Validation using Formal Methods. Reliability Engineering & System Safety 132:20-35. December 2014.


 Systems Engineering


Hierarchical and decentralized control of distributed energy systems.
Proposers: Riccardo Minciardi, Michela Robba
Curriculum: Systems Engineering

Research areas:

Optimization, smart grids, distributed control, hierarchical control, optimal control, model predictive control, stochastic optimization, microgrids.

Description: The development of the renewable energy sector, the concept of sustainable energy, and the use of technologies for distributed generation have focused attention on smart grids. Microgrid research (i.e. the planning and management of single microgrids or their coordination) fits very well with ongoing smart grid activities throughout the world, and several challenges need to be investigated. Microgrids are able to integrate different distributed and heterogeneous sources, either programmable or stochastic (the latter, typically, being renewables like wind and solar), and require intelligent management methods and efficient design in order to meet the needs of the area in which they are located. Generally, microgrids are low-voltage distribution networks installed in small areas (like university campuses or districts), but buildings or industrial plants can also themselves be seen as microgrids. Energy Management Systems (EMSs) are vital tools used to optimally operate and schedule microgrids. The proposed PhD research activity will fall within this framework. In particular, the following main objectives/activities can be listed:

  1. Definition and development of a general EMS for polygeneration grid-connected microgrids able to represent small districts or areas or general buildings.
  2. Definition and development of hierarchical control architectures for islanded microgrids with specific reference to primary, secondary and tertiary control, taking into account frequency and voltage control.
  3. Models and methods for the coordination of microgrids (that include renewables, storage systems, electrical vehicles, etc.) that can work in islanded and grid-connected modes.

Demonstration activities at the Savona Campus research infrastructures are foreseen to test the developed models.
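As a minimal sketch of the first objective, the snippet below dispatches a toy set of sources in merit order (cheapest first) to cover a demand, with the grid connection as slack. All source names, capacities, costs, and the demand value are invented placeholders, not data from the Savona Campus site; a real EMS would solve a multi-period optimization with storage dynamics and forecasts.

```python
# Merit-order dispatch sketch for a grid-connected microgrid EMS.
def dispatch(demand_kw, sources):
    # sources: list of (name, capacity_kw, cost_per_kwh); dispatch cheapest first.
    schedule, residual = {}, demand_kw
    for name, cap, _cost in sorted(sources, key=lambda s: s[2]):
        p = min(cap, residual)       # use the source up to its capacity
        schedule[name] = p
        residual -= p
    schedule["grid_import"] = residual   # grid as slack (assumed unlimited)
    return schedule

# Illustrative sources: free PV, cheap battery discharge, dearer CHP.
sources = [("pv", 30.0, 0.0), ("battery", 20.0, 0.05), ("chp", 40.0, 0.12)]
print(dispatch(100.0, sources))   # PV and battery first, then CHP, grid covers the rest
```

Running this on each time step of a demand profile gives the skeleton of a rule-based EMS; the research objectives above replace this greedy rule with optimization and hierarchical control.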


Link to personal homepage



  1. A. Bidram and A. Davoudi, "Hierarchical Structure of Microgrids Control System", IEEE Transactions on Smart Grid, vol. 3, no. 4, pp. 1963-1976, 2012.
  2. M. Kiani Bejestani, A. Annaswamy, and T. Samad, "A hierarchical transactive control architecture for renewables integration in smart grids: analytical modeling and stability", IEEE Transactions on Smart Grid, vol. 5, pp. 2054-2065, 2014.


Energy Efficiency in Buildings
Models, Methods and Wireless Technology for Power and Energy Optimal Control

Proposer: M. Robba

Research area(s):
Energy efficiency, wireless sensors, temperature control, demand response, optimal control, optimization, simulation.

Curriculum: Systems Engineering

Buildings account for 20–40% of total final energy consumption, and this share has been increasing at a rate of 0.5–5% per annum in developed countries. Recent research shows that 20–30% of building energy consumption can be saved through optimized operation and management, without changing the building structure or the hardware configuration of the energy supply system. Therefore, there is a huge potential for building energy savings through efficient operation. Energy Management Systems (EMSs) and Building Automation Systems (BASs) should be efficient and reliable, in order to schedule production technologies and devices and to guarantee comfort in terms of temperature and humidity. Moreover, it is necessary to design cost-effective EMSs and BASs that can be supported for large-scale implementation. The proposed PhD theme falls within this framework and aims at the definition of a wireless-based architecture for the monitoring and management of buildings. Specifically, the work should address the following research challenges:

1) Design and implementation of a monitoring system based on wireless sensors in a building;

2) Definition and implementation of simulation models;

3) Formalization and implementation of agent-based optimization models for distributed control;

4) Definition of algorithms for demand response and resources scheduling;

5) Integration between EMSs for polygeneration microgrids and for buildings management. 
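A simple plant model helps make challenge 2 concrete. The sketch below simulates a first-order thermal (RC) model of a single zone under a thermostat with a dead band, the kind of simulation model an EMS/BAS would embed in a demand-response or MPC loop. All parameter values (thermal resistance, capacitance, heater size) are made up for illustration.

```python
# First-order RC zone model with a bang-bang thermostat controller.
def simulate(t_init, t_out, setpoint, band=0.5, steps=96, dt_h=0.25,
             r=2.0, c=5.0, heater_kw=10.0):
    # dT/dt = (t_out - T)/(r*c) + u*heater_kw/c   (simple RC zone model)
    temps, u, energy_kwh = [t_init], 0, 0.0
    for _ in range(steps):
        t = temps[-1]
        if t < setpoint - band:
            u = 1                    # heater on below the dead band
        elif t > setpoint + band:
            u = 0                    # heater off above it
        t_next = t + dt_h * ((t_out - t) / (r * c) + u * heater_kw / c)
        energy_kwh += u * heater_kw * dt_h
        temps.append(t_next)
    return temps, energy_kwh

# One simulated day (96 quarter-hour steps) of heating a cold zone.
temps, energy = simulate(t_init=16.0, t_out=5.0, setpoint=20.0)
print(round(temps[-1], 2), round(energy, 2))
```

The zone temperature settles into an oscillation around the setpoint; replacing the thermostat rule with an optimizer over electricity prices is the demand-response problem of challenge 4.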

Link to the group or personal webpage:  http://www.dibris.unige.it/robba-michela

M. Avci, M. Erkoc, A. Rahmani, and S. Asfour, “Model predictive HVAC load control in buildings using real-time electricity pricing,” Energy Buildings, vol. 60, pp. 199–209, May 2013.

P.-D. Morosan, R. Bourdais, D. Dumur, and J. Buisson, “Distributed model predictive control based on benders’ decomposition applied to multisource multizone building temperature regulation,” in Proc. IEEE CDC, pp. 3914–3919, 2010.


Smart scheduling approaches for manufacturing industry.

Proposers: Massimo Paolucci

Research area(s):
Manufacturing Production Scheduling, Metaheuristics, Optimization, Industry 4.0

Curriculum: Systems Engineering


Scheduling in manufacturing industry involves key decisions about how best to exploit the available resources (e.g., machines, tools, workers, energy) in order to efficiently perform the required production activities. Scheduling decisions are at the operational level, that is, they regard a short planning horizon (a day or a shift) and must take into account detailed production conditions and requirements. In real manufacturing industries, scheduling problems are large scale (the number of activities to be performed may be huge, and workshops may include many machines and tools), so the number of possible alternative decisions usually grows exponentially. In addition, even if scheduling problems share common features, several relevant differences characterize the different industrial sectors (e.g., food and beverage, fashion, automotive). Therefore, an effective general-purpose solution approach that could serve as the basis for developing scheduling systems for different sectors, avoiding a restart from scratch with a sector-specific algorithm, does not seem to be available. Finally, the introduction of the Industry 4.0 paradigm will make it possible to rely on fresh data from the field, improving the possibility of planning, adapting and revising scheduling decisions more effectively, even reacting to the unpredicted changes that usually characterize real production systems.

The purpose of this research project is to design a new solution approach for a large class of the scheduling problems emerging in manufacturing industry. Such an approach can be based on several building blocks and strategies (recent metaheuristics such as adaptive large neighborhood search or bio-inspired algorithms, simulation-optimization, as well as heuristics based on mathematical programming) that can be exploited to design a solver framework for this class of hard combinatorial problems.
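The destroy-and-repair loop at the heart of (adaptive) large neighborhood search can be sketched on the simplest scheduling problem of this family: makespan minimization on identical parallel machines. Job durations, the greedy repair rule (longest processing time first), and all parameters below are illustrative.

```python
import random

def makespan(assign, durations, n_machines):
    loads = [0.0] * n_machines
    for job, m in enumerate(assign):
        loads[m] += durations[job]
    return max(loads)

def greedy_repair(assign, jobs, durations, n_machines):
    # Reinsert unassigned jobs, longest first, onto the least-loaded machine.
    loads = [0.0] * n_machines
    for job, m in enumerate(assign):
        if m is not None:
            loads[m] += durations[job]
    for job in sorted(jobs, key=lambda j: -durations[j]):
        m = loads.index(min(loads))
        assign[job] = m
        loads[m] += durations[job]
    return assign

def lns(durations, n_machines, iters=200, destroy=3, seed=1):
    rng = random.Random(seed)
    assign = greedy_repair([None] * len(durations), range(len(durations)),
                           durations, n_machines)
    best, best_val = assign[:], makespan(assign, durations, n_machines)
    for _ in range(iters):
        cand = best[:]
        removed = rng.sample(range(len(durations)), destroy)   # destroy step
        for j in removed:
            cand[j] = None
        greedy_repair(cand, removed, durations, n_machines)    # repair step
        val = makespan(cand, durations, n_machines)
        if val <= best_val:
            best, best_val = cand, val
    return best, best_val

durations = [7, 5, 4, 4, 3, 3, 2, 2]
best, val = lns(durations, n_machines=3)
print(best, val)
```

An adaptive LNS would additionally maintain several destroy/repair operators and learn which to apply; swapping in machine eligibility, setup times, or due dates changes only `makespan` and `greedy_repair`, which is the "framework" idea of the project.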


Link to the group or personal webpage:



Modeling, dynamic assignment and optimal control of traffic flows on road networks

Proposer: Simona Sacone

Research area: Transportation, Logistics

Curriculum: Systems Engineering


The objective of this research theme is the definition of innovative methods for modeling, forecasting and regulating traffic flows on road networks. The two key aspects of the project are the possibility of enhancing road efficiency by effectively exploiting the existing transportation capacity, and the need to take care of such crucial aspects as safety, energy consumption and environmental pollution. A major innovative feature of the proposed work thus lies in the twofold aim that all phases of the research activity will have to pursue: increasing the traffic system performance while explicitly taking into account the impact of the road network on the surrounding environment and on citizens' safety and quality of life. The research work will deal with innovative modeling and forecasting tools, methods and algorithms for dynamic traffic assignment, and the development of control schemes for traffic regulation (aimed at reducing traffic congestion and minimizing the environmental impact of the road network).
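One classical macroscopic model in this area is the cell transmission model, which updates per-cell vehicle densities by flow conservation under a triangular fundamental diagram. The sketch below simulates a short freeway stretch to steady state; all parameter values (free-flow speed, wave speed, capacities, inflow) are illustrative.

```python
# Minimal cell transmission model step for a freeway stretch of equal cells.
def ctm_step(rho, v=100.0, w=25.0, rho_max=160.0, q_max=2000.0,
             cell_km=0.5, dt_h=0.005, inflow=1500.0):
    # Triangular fundamental diagram; note v*dt_h/cell_km <= 1 (CFL condition).
    def demand(r):  return min(v * r, q_max)              # what a cell can send
    def supply(r):  return min(w * (rho_max - r), q_max)  # what it can receive
    flows = [min(inflow, supply(rho[0]))]                 # boundary inflow
    for i in range(len(rho) - 1):
        flows.append(min(demand(rho[i]), supply(rho[i + 1])))
    flows.append(demand(rho[-1]))                         # free outflow
    # Conservation: density changes by (flow in - flow out) * dt / length.
    return [r + dt_h / cell_km * (flows[i] - flows[i + 1])
            for i, r in enumerate(rho)]

rho = [10.0] * 5            # initial density (veh/km) in each of 5 cells
for _ in range(200):
    rho = ctm_step(rho)
print([round(r, 1) for r in rho])
```

With an uncongested inflow of 1500 veh/h, every cell converges to the free-flow density 15 veh/km (1500 / 100); ramp metering and variable speed limits enter the model by modulating the demand and supply terms, which is where the control schemes of this project act.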

Link to the group or personal webpage:



  1. A. Ferrara, S. Sacone, S. Siri, "Design of networked freeway traffic controllers based on event-triggered control concepts", International Journal of Robust and Nonlinear Control, 26 (6), pp. 1162–1183, 2016.
  2. C. Pasquale, I. Papamichail, C. Roncoli, S. Sacone, S. Siri, M. Papageorgiou, "Two-class freeway traffic regulation to reduce congestion and emissions via nonlinear optimal control", Transportation Research C, 55, pp. 85–99, 2015.
  3. C. Caballini, C. Pasquale, S. Sacone, S. Siri, "An event-triggered receding-horizon scheme for planning rail operations in maritime terminals", IEEE Transactions on Intelligent Transportation Systems, 15 (1), pp. 365–376, 2014.
  4. S. Sacone, S. Siri, "Optimal Vendor-Managed Inventory policies for distribution systems with limited and capacitated vehicles", IEEE Transactions on Automation Science and Engineering, 11(3), pp. 948–953, 2014.



Analysis of Genic Expression via Game Theory and Graphs

Proposer: Marcello Sanguineti
Research area(s): computational biology, game theory, graphs and networks, optimization

Description: Recent research (e.g., Albino et al. (2008)) has emphasized the possibility of applying Game Theory (Peters (2008)) to the analysis of medical results obtained via the so-called "microarray techniques" (Schena et al. (1995)). Such techniques allow one to "take a picture" of the expression of thousands of genes in a cell by means of a single experiment. The departure point is the study of the genetic expression in a sample of cells that satisfy particular biological conditions: for instance, cells belonging to a cancer-affected subject. Game Theory plays a basic role in defining the "microarray games" (Moretti et al. (2007)) and in evaluating the relevance of each gene in influencing or even determining a pathology, by taking into account the interactions with other genes. To this end, the literature (see, e.g., Moretti et al. (2007)) has investigated the use of some "power indices" from Game Theory (such as the Shapley and Banzhaf values) to estimate gene relevance. In particular, an in-depth analysis has been performed in relation to colon cancer and neuroblastic tumors.

The aim of this research project is to study the use of Game Theory in the analysis of genic expression data and to contribute to a better understanding of some originating factors of cancer. As a first step, the already-available studies based on the Shapley and Banzhaf values in microarray games will be further developed and tested on case studies made available by the research group of Prof. Alberto Ballestrero at the IRCCS S. Martino Hospital in Genova, with whom we have already established a joint research plan. The partner team from S. Martino has access to databases made up of some hundreds of microarray and RNA-sequencing experiments, made available by the Cancer Genome Atlas (TCGA) Network. As a second step, the research aims at considering other power indices, such as the "τ-value". The third phase will be devoted to a comparison among the various indices, from both theoretical and experimental viewpoints, so as to evaluate which are best suited to the study of cancer. Finally, the combination of tools from Game Theory with graph-based approaches, which allow one to model interactions between pairs of genes, will be examined.
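The Shapley value underlying these relevance indices can be computed exactly, for a handful of "genes", by averaging marginal contributions over all player orderings. The characteristic function below is a toy stand-in for a microarray game (it is not built from real expression data): the "pathology" is explained when gene g1 is abnormally expressed together with either g2 or g3.

```python
from itertools import permutations

# Exact Shapley value by averaging marginal contributions over all orderings;
# exponential in the number of players, so only for small illustrative games.
def shapley(players, v):
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)   # marginal contribution
            coalition = coalition | {p}
    return {p: phi[p] / len(perms) for p in players}

# Toy characteristic function: g1 together with g2 or g3 "explains" the pathology.
def v(coalition):
    return 1.0 if "g1" in coalition and ({"g2", "g3"} & coalition) else 0.0

print(shapley(["g1", "g2", "g3"], v))   # g1: 2/3, g2: 1/6, g3: 1/6
```

The index correctly assigns g1 the largest relevance (it is necessary for the outcome) while splitting the remainder between the interchangeable g2 and g3; real microarray games define v from expression profiles over many patients, and approximate the Shapley value by sampling orderings.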

Link to the group or personal webpage:



-Albino D., Scaruffi P., Moretti S., Coco S., Di Cristofano C., Cavazzana A., et al. Identification of low intratumoral gene expression heterogeneity in neuroblastic tumors by wide-genome analysis and game theory. Cancer 113(6):1412-1422, 2008.

-Moretti S., Patrone F., Bonassi S. The class of microarray games and the relevance index for genes. Top 15:256-280, 2007.

-Peters H. Game Theory. A Multi-Leveled Approach, Springer, 2008.

-Schena M., Shalon D., Davis R.W., Brown P.O. Quantitative Monitoring of Gene Expression Patterns with a Complementary DNA Microarray. Science 270:467-470, 1995.

Transportation Network Optimization Via Transferable-Utility Games

Proposer: Marcello Sanguineti
Research area(s): network optimization, game theory, centrality measures


Network connectivity is an important aspect of any transportation network, as the role of the network is to provide society with the ability to easily travel from point to point using various modes. Analyzing a network's connectivity can assist decision makers in identifying weak components, detecting and preventing failures, and improving connectivity in terms of reduced travel time, reduced costs, increased reliability, easier access, etc.

A basic question in network analysis is: how “important” is each node? An important node might, e.g., contribute substantially to short connections between many pairs of nodes, handle a large amount of the traffic, generate relevant information, or represent a bridge between two areas. To quantify the relative importance of nodes, one possible approach consists in using the concept of “centrality” [1, Chapter 10]. A limitation of classical centrality measures is that they evaluate nodes based on their individual contributions to the functioning of the network. For instance, the importance of a stop in a transportation network can be computed as the difference between the full network capacity and the capacity when the stop is closed.

However, such an approach is inadequate when, for instance, multiple stops can be closed simultaneously. As a consequence, one needs to refine the existing centrality measures so as to take into account that the network nodes do not act merely as individual entities, but as members of groups of nodes. To this end, one can exploit game theory [2], which, in general terms, provides a basis for the systematic study of the relationship between rules, actions, choices, and outcomes in situations that can be either competitive or non-competitive.

The idea at the root of game-theoretic centrality measures [3] is the following: the nodes are considered as players in a cooperative game, where the value of each coalition of nodes is determined by certain graph-theoretic properties. The key advantage of this approach is that nodes are ranked not only according to their individual roles in the network, but also according to how they contribute to the roles of all possible groups of nodes. This is important in applications in which a group's performance cannot be described simply as the sum of the individual performances of its members.

In the case of transportation networks, suppose we have a certain budget at our disposal. One possible approach consists in asking whether investing all the money in increasing the capacity and/or service of a single transportation component (road section, bridge, transit route, bus stop, etc.) substantially improves the whole network. A better way of proceeding for the network analyst/designer would probably be to consider simultaneously improving a (possibly small) subset of the components. In this case, to evaluate the importance of a component one has to take into account the potential gain of improving it as part of a group of components, not merely the potential gain of improving it alone.
This approach can be formalized in terms of cooperative game theory [2], where the nodes are players whose performances are studied in coalitions, i.e., subsets of players.
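As a minimal illustration of a game-theoretic centrality, the sketch below ranks the nodes of a tiny hypothetical network by the Shapley value of a simple coverage game: a coalition is worth the number of nodes it reaches directly, i.e., its members plus their neighbors (one of the simplest games of the family surveyed in [3]). The network and its node names are invented for the example.

```python
from itertools import permutations

# Toy undirected network (hypothetical): one hub 'c' and three leaves.
adj = {"c": {"l1", "l2", "l3"},
       "l1": {"c"}, "l2": {"c"}, "l3": {"c"}}

def v(coalition):
    """Coverage game: a coalition's value is the number of nodes it
    covers, i.e., its members plus their direct neighbors."""
    covered = set(coalition)
    for n in coalition:
        covered |= adj[n]
    return len(covered)

def shapley(players, v):
    """Exact Shapley value over all orderings; exponential in the
    number of players, so only for small illustrative networks."""
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = []
        for p in order:
            before = v(coalition)
            coalition.append(p)
            phi[p] += v(coalition) - before
    return {p: phi[p] / len(perms) for p in players}

print(shapley(list(adj), v))
```

The hub ends up with the largest value, but each leaf still receives a positive share, since a leaf entering an empty coalition covers itself and the hub. For this particular game the Shapley value also admits a closed-form expression in terms of node degrees, which is what makes polynomial-time computation possible on large networks.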
This research project, which takes its cue from the works [4,5], consists in developing methods and tools from a particular type of cooperative games, called “cooperative games with transferable utility” (“TU games” for brevity), to optimize transportation networks. Given a transportation network, a TU game will be defined that takes into account the network topology, the weights associated with the arcs, and the demand based on the origin-destination matrix (weights associated with the nodes). The nodes of the network represent the players of the TU game.

We aim at exploiting game-theoretic solution concepts developed over decades of research to identify the nodes that play a major role in the network. In particular, we shall use the solution concept known as the Shapley value [2], which attributes a value to each node in such a way that the larger the value, the greater the node's importance. The Shapley value enjoys mathematical properties well suited to the proposed analysis. Computational aspects related to the evaluation of the Shapley value will be investigated too [6], studying the possibility of polynomial-time computation with respect to the network dimension.
Depending on whether the analysis focuses on the “physical nodes” or on the “physical links”, the definition of a player changes. This research project considers both. When the transportation nodes (representing, e.g., intersections, transit terminals, bus stops, major points of interest, etc.) are analyzed, the network on which the TU game is defined is identical to the physical network. On the other hand, when the arcs (e.g., road segments, transit routes, rail lines, etc.) are analyzed, the network is transformed in such a way that the physical links are modeled as nodes.
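The transformation from a link-centric to a node-centric view is the classical line-graph construction, sketched below on a small invented road network: each physical link becomes a node, and two link-nodes are adjacent whenever the original links share an endpoint.

```python
def line_graph(edges):
    """Map each physical link to a node of a new network; two
    link-nodes are adjacent iff the original links share an
    endpoint (the classical line-graph construction)."""
    return {e: {f for f in edges if f != e and set(e) & set(f)}
            for e in edges}

# Small road network (illustrative): a path a - b - c plus a spur b - d.
edges = [("a", "b"), ("b", "c"), ("b", "d")]
print(line_graph(edges))
```

On this network all three links meet at node b, so in the transformed network every link-node is adjacent to the other two, and any node-based TU game can then be played over the links unchanged.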

Link to the group or personal webpage: http://www.dist.unige.it/msanguineti/

[1] S. Wasserman and K. Faust, Social Network Analysis: Methods and Applications. Vol. 8. Cambridge University Press, 1994.
[2] J. González-Díaz, I. García-Jurado, and M.G. Fiestras-Janeiro, An Introductory Course on Mathematical Game Theory. AMS, 2010.
[3] T.P. Michalak, Game-Theoretic Network Centrality - New Centrality Measures Based on Cooperative Game Theory, 2016. Available from: http://game-theoretic-centrality.com/index.html.
[4] Y. Hadas and M. Sanguineti, An Approach to Transportation Network Analysis Via Transferable-Utility Games. 96th Annual Meeting of the Transportation Research Board, Transportation Research Board of the National Academies, Washington, DC, January 8-12, 2017.
[5] Y. Hadas, G. Gnecco, M. Sanguineti, An Approach to Transportation Network Analysis Via Transferable Utility Games. Submitted, 2017.

New planning approaches for logistics in the Physical Internet perspective

Curriculum: Systems Engineering

Proposer(s): Simona Sacone, Massimo Paolucci

Research area(s): Logistics systems, Physical Internet, Planning, Optimization


The Physical Internet (PI) is envisioned to completely change the way goods are produced and transported around the planet. The PI would mimic the way information is packaged, distributed, and stored in the virtual world in order to improve real-world logistics. Accordingly, mirroring virtual data transmission, freight travels from hub to hub in an open network rather than directly from origin to destination. Cargo is routed automatically and, at each segment, it is bundled for efficiency. This requires building a new network topology and assessing the benefits it could generate in terms of carbon footprint, throughput times, and cost reductions, including the socio-economic aspects.

The purpose of this research project is to identify and analyze the new classes of decision problems emerging from the introduction of the PI perspective in logistics, focusing on the optimization of logistic operations planning. In particular, a general intermodal transportation scenario will be considered, corresponding to a logistic network that includes road and railway hubs, as well as exchange centers and warehouses. Particular attention will be devoted to the concepts of resiliency and vulnerability of the network and to how these aspects should be embedded in the problems' formulation. Both network design and optimization problems will be considered, taking into specific account key performance indicators relevant to transportation efficiency, human resource management, and overall system resilience.
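The hub-to-hub routing idea can be illustrated with a plain shortest-path computation over a weighted hub network; the hubs, links, and weights below are hypothetical stand-ins for, e.g., travel times between PI hubs.

```python
import heapq

# Toy PI hub network (hypothetical weights, e.g., travel times):
# freight moves hub-to-hub rather than directly origin-to-destination.
hubs = {"A": {"B": 2, "C": 5},
        "B": {"A": 2, "C": 2},
        "C": {"A": 5, "B": 2, "D": 1},
        "D": {"C": 1}}

def cheapest_route(start, goal):
    """Dijkstra over the hub network: returns (cost, hub sequence)."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, hub, path = heapq.heappop(queue)
        if hub == goal:
            return cost, path
        if hub in seen:
            continue
        seen.add(hub)
        for nxt, w in hubs[hub].items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

print(cheapest_route("A", "D"))
```

Here the three-leg route A-B-C-D (cost 5) beats the more direct A-C-D (cost 6), which is exactly the PI intuition: short hub-to-hub legs, re-bundled at each hub, can outperform direct origin-to-destination moves. The actual research problems add design decisions, demand, and resilience indicators on top of such a routing core.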

Link to the group or personal webpage:



Green logistics: the electrical vehicle routing problem in smart grids.

Curriculum: Systems Engineering

Proposer(s): Michela Robba, Massimo Paolucci

Research area(s): Green vehicle routing, electrical vehicle routing, Metaheuristics, Optimization


The international policies on sustainable development and reduction of greenhouse gas emissions have led to an increase in the use of renewable energies and to the development of clean technologies. The growth of energy production from renewables and of distributed generation has made it necessary to turn the current grid management into a more flexible system that integrates and coordinates microgrids, local areas, active buildings, and electrical vehicles (EVs). Vehicle Routing Problems (VRPs) deal with fundamental operational decisions in logistics, as they basically require assigning transportation demands to a fleet of vehicles in order to optimize one or more specified objectives (e.g., the total vehicle travel distance), while taking into account several operational conditions. In recent years, the increasing interest in sustainability has led to the definition of the Green VRP (GVRP) and then of the Electrical VRP (EVRP), to take into account the additional challenges associated with operating a fleet of alternative-fuel vehicles. In such problems, in particular, the possibility of additional stops for recharging exists. Further aspects increasing the difficulty of solving such problems are the limited availability of recharging facilities and the significant time needed for recharging.

In this research project, since a smart grid is assumed for the service area, an extension of the EVRP is considered that also accounts for the impact of vehicle recharging on the energy management of the grid, as well as for the possibility of using electrical vehicles as distributed energy resources able to provide power supply during peak-load periods.
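The distinctive EVRP ingredient, recharging stops forced by a limited battery, can be sketched for a single vehicle with a fixed visiting order. All names, distances, and the battery model below are hypothetical; energy use is assumed proportional to distance, and a stop at the charging station restores full capacity.

```python
def plan_with_recharges(route, dist, capacity, station):
    """Insert recharging stops into a fixed visiting order whenever
    the next leg would exceed the remaining battery range.
    Simplifying assumptions: energy use proportional to distance,
    and recharging restores full capacity."""
    plan, charge, pos = [route[0]], capacity, route[0]
    for nxt in route[1:]:
        if dist[(pos, nxt)] > charge:
            # Detour to the station, which must itself be reachable.
            assert dist[(pos, station)] <= charge, "stranded vehicle"
            plan.append(station)
            charge, pos = capacity, station
        charge -= dist[(pos, nxt)]
        plan.append(nxt)
        pos = nxt
    return plan

# Hypothetical instance: battery range 6, so the 5-unit leg c1 -> c2
# is infeasible after serving c1 and forces a detour to the station.
dist = {("depot", "c1"): 4, ("c1", "c2"): 5,
        ("c1", "station"): 2, ("station", "c2"): 4}
print(plan_with_recharges(["depot", "c1", "c2"], dist, 6, "station"))
```

A full EVRP solver would instead decide the visiting order, the choice among several stations, and partial-recharge amounts jointly, and the smart-grid extension studied here would additionally price each recharge by its impact on the grid's load profile.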

Link to the group or personal webpage:



