PhD Program in Computer Science and Systems Engineering

Research Projects Proposals 2020 (XXXVI Cycle)

 


Research line: Data Science and Engineering

Title: Efficient algorithms for large-scale structured machine learning (Funded by an ERC Consolidator Grant)
Proposer: Lorenzo Rosasco
Curriculum: Computer Science
 
Short description: The project will aim at developing theoretical and algorithmic ideas to explain the success of current systems and to suggest the development of novel, practical, and efficient solutions. Candidates must have strong mathematical and computational skills.
Topics of interest include, but are not limited to: deterministic and random projections/sketching, optimization methods for non-smooth/non-convex problems (stochastic, accelerated, distributed, parallel methods), data with geometric structure (graphs, strings, permutations, manifolds) as well as with time structure (dynamical systems). While the emphasis is on methodological and computational aspects, the candidates will have the opportunity to work in close collaboration on a number of applications, including high-energy physics data, robotics, and time-series prediction.

Title: Ethics-by-Design Query Processing / Responsible Query Processing

Proposer: Barbara Catania

Research area: Data Science and Engineering
Curriculum: Computer Science

Description: Nowadays, large-scale technologies for the management and the analysis of big data have a relevant and positive impact: they can improve people’s lives, accelerate scientific discovery and innovation, and bring about positive societal change. At the same time, it becomes increasingly important to understand the nature of these impacts at the social level and to take responsibility for them, especially when they deal with human-related data.

Properties like diversity, serendipity, fairness, or coverage have been recently studied at the level of some specific data processing systems, like recommendation systems, as additional dimensions that complement basic accuracy measures with the goal of improving user satisfaction [2].

Due to the above-mentioned social relevance, and because the need to take responsibility is also made mandatory by the recent General Data Protection Regulation of the European Union [GDPR16], the development of solutions satisfying non-discrimination requirements by design is currently one of the main challenges in data processing, and it is becoming increasingly crucial at every data processing stage, including data management [1, 3, 4].

Based on our past experience in advanced query processing for both stored and streaming data, the aim of the proposed research is to design, implement, and evaluate ad hoc query processing techniques for stored and streaming data that automatically enforce specific beyond-accuracy properties, with special reference to diversity. The focus will be on compositional techniques: property satisfaction will be preserved in any more complex query workflow, possibly combining several query processing steps iteratively.
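
As a simple, purely illustrative example of enforcing a beyond-accuracy property on a result set (not the compositional techniques to be developed in the project), the sketch below applies a greedy, MMR-style re-ranking that trades off relevance against redundancy; all function names, the similarity measure, and the toy data are hypothetical assumptions.

# Illustrative sketch only: greedy diversity-aware selection of k items,
# balancing relevance against similarity to already selected items.
def diversified_top_k(items, relevance, similarity, k, lam=0.5):
    """Greedily pick k items maximizing lam*relevance(x) minus
    (1-lam)*max similarity of x to the items already selected."""
    selected = []
    candidates = list(items)
    while candidates and len(selected) < k:
        def score(x):
            redundancy = max((similarity(x, s) for s in selected), default=0.0)
            return lam * relevance(x) - (1.0 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy usage: numeric "answers" clustered around 1, 5, and 9.
if __name__ == "__main__":
    items = [1.0, 1.1, 1.2, 5.0, 5.1, 9.0]
    rel = lambda x: x / 9.0                       # toy relevance score
    sim = lambda a, b: 1.0 if abs(a - b) < 1.0 else 0.0  # toy similarity
    print(diversified_top_k(items, rel, sim, k=3))  # prints [9.0, 5.1, 1.2]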

Link to the group or personal webpage: dama.dibris.unige.it

References:

[1] S. Abiteboul et al. Research Directions for Principles of Data Management (Dagstuhl Perspectives Workshop 16151). Dagstuhl Manifestos 7(1): 1-29 (2018)

[2] M Kaminskas, D Bridge. Diversity, Serendipity, Novelty, and Coverage: A Survey and Empirical Analysis of Beyond-Accuracy Objectives in Recommender Systems. TiiS 7(1): 2:1-2:42 (2017) 

[3] J Stoyanovich, B Howe, H.V. Jagadish. Special Session: A Technical Research Agenda in Data Ethics and Responsible Data Management. SIGMOD Conf. 2018: 1635-1636 (2018)

[4] J Stoyanovich, K Yang, H.V. Jagadish. Online Set Selection with Fairness and Diversity Constraints. EDBT 2018: 241-252 (2018)


Title: Assessing similarity: the role of embeddings in schema/ontology matching and in query relaxation

Proposer(s):  Giovanna Guerrini

Research area(s): Data Science and Engineering

Curriculum: Computer Science

Description:
A large number of applications need to be able to assess similarity between concepts that are represented by words, possibly bound by hierarchical structures. Word embeddings are used for many natural language processing (NLP) tasks thanks to their ability to capture the semantic relations between words by embedding them in a metric space with lower dimensionality [1]. Word embeddings have been mostly used to solve traditional NLP problems, such as question answering, textual entailment, and sentiment analysis. 

Recently, embeddings have emerged as a new way of encapsulating hierarchical information [2]. Specifically, hyperbolic embeddings lie in hyperbolic spaces, which are well suited to representing hierarchical structure while preserving distances among elements. The idea behind hyperbolic embeddings is very simple: forcing semantically correlated elements to be close to each other in the embedding space.
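
For reference, the Poincaré-ball embeddings of [2] place points in the open unit ball and measure their distance as

\[ d(\mathbf{u},\mathbf{v}) \;=\; \operatorname{arcosh}\!\left( 1 + 2\,\frac{\lVert \mathbf{u}-\mathbf{v} \rVert^{2}}{\bigl(1-\lVert \mathbf{u}\rVert^{2}\bigr)\bigl(1-\lVert \mathbf{v}\rVert^{2}\bigr)} \right), \qquad \lVert \mathbf{u}\rVert, \lVert \mathbf{v}\rVert < 1, \]

so that distances grow rapidly near the boundary of the ball; this is what allows tree-like hierarchies to be embedded with low distortion in few dimensions.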

The aim of this research theme is to exploit this way of assessing similarity in order to establish mappings between schemas and align ontologies [3] more efficiently and accurately; these are crucial tasks for information integration. The ability to efficiently identify similar/corresponding terms can also be exploited in relaxed query processing [4].

Finally, the possibility of exploiting embeddings to assess many-faceted similarity for concepts that are described by a word but are also positioned in space, as in the case of geo-terms, can be investigated [5].

Link to the group or personal webpage:

dama.dibris.unige.it

References:

[1] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. ICLR Workshop, 2013.

[2] M. Nickel and D. Kiela. Poincaré embeddings for learning hierarchical representations. NIPS 2017.

[3] P. Shvaiko & J. Euzenat. Ontology matching: state of the art and future challenges. IEEE Transactions on knowledge and data engineering, 25(1), 158-176, 2013.

[4] Barbara Catania, Giovanna Guerrini. Adaptively Approximate Techniques in Distributed Architectures. SOFSEM 2015: 65-77

[5] K. Beard. A semantic web based gazetteer model for VGI.  ACM SIGSPATIAL Workshop on Crowdsourced and Volunteered Geographic Information, 2012.


Title: Joint segmentation, detection, tracking in video sequences for efficient and effective scene understanding

Proposer: Francesca Odone
Curriculum: Computer Science

Description: Although intrinsically of a different nature, image segmentation and object detection address two different questions with the common goal of understanding the content of a scene. Image segmentation is often seen as a lower-level task. In recent years we have witnessed the development of semantic segmentation approaches, in which semantic labels are also associated with pixels, super-pixels, or image regions. In this way we obtain an overall understanding of the scene, as image regions may contain objects as well as background areas. Today, semantic segmentation methods provide good performance at the price of being computationally demanding; for this reason, in video analysis they are usually applied at a lower rate and propagated to the following frames. If the camera is moving, propagation requires motion estimation. Conversely, detection is an efficient, higher-level task that applies to entities with a well-defined spatial extent (objects); object detection is usually less accurate in terms of localisation (it provides bounding boxes). Because of its higher-level nature, it is easier to extend to video analysis through the use of state-of-the-art tracking or prediction algorithms.

We will explore different ways of combining these complementary sources of information, trying to achieve a good compromise between efficiency and effectiveness. Our focus will be primarily on the analysis of video sequences acquired by moving cameras, including applications related to robotics, automation, and autonomous guidance.


Title: Machine learning for prognostic maintenance

Proposers: Stefano Rovetta, Francesco Masulli
Curriculum: Computer Science

Description: 
Predictive maintenance is widely acknowledged as the "killer application" of machine learning in Industry 4.0. This research activity will develop machine learning methods for prognostic maintenance, an approach that aims not only at predicting future maintenance needs, but also at describing causes and effects of future evolutions of a system: "foresight," as opposed to "forecast."

The activity will be carried out in collaboration with a software company that already markets a more traditional solution for predictive maintenance. Therefore, the work will build on an existing, substantial body of tools and know-how. The candidate is expected to develop competences that are of great technical, industrial, as well as commercial, interest.

Link to the group or personal webpage: https://www.dibris.unige.it/rovetta-stefano

References:

[1] Vogl, G.W., Weiss, B.A. & Helu, M. "A review of diagnostic and prognostic capabilities and best practices for manufacturing." J Intell Manuf (2019) 30: 79.


Title: Smart request processing for personalised data space-user interactions through approximation and learning

Proposers: Barbara Catania, Giovanna Guerrini
Curriculum: Computer Science

Description:
The increase of data size and complexity requires a deep revisiting of user-data interactions and a reconsideration of the notion of query itself. A huge number of applications need user-data interactions that emphasize user context and interactivity, with the goal of facilitating interpretation, retrieval, and assimilation of information [1]. The ability to learn from observations and interactions [2], as well as to process requests approximately [3,4], are two key ingredients in these new settings.
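
As a deliberately minimal illustration of the approximate-processing ingredient (not the techniques to be developed here), the sketch below estimates an aggregate from a uniform sample with a normal-approximation confidence interval; the sampling fraction and data are arbitrary assumptions.

# Illustrative sketch of sampling-based approximate aggregation:
# estimate SUM(x) from a uniform sample, with an approximate 95% interval.
import random
import statistics

def approx_sum(values, sample_frac=0.01, z=1.96):
    n = len(values)
    k = max(2, int(n * sample_frac))
    sample = random.sample(values, k)
    est = statistics.fmean(sample) * n
    # Standard error of the scaled-up estimate (finite-population correction ignored).
    se = statistics.stdev(sample) / (k ** 0.5) * n
    return est, (est - z * se, est + z * se)

if __name__ == "__main__":
    data = [random.random() for _ in range(100_000)]
    est, ci = approx_sum(data)
    print(f"exact={sum(data):.1f}  approx={est:.1f}  95% CI=({ci[0]:.1f}, {ci[1]:.1f})")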

The aim of this research theme is to devise smart, innovative approaches for exploiting, from a processing viewpoint and with a focus on graph-shaped data, the role of user context (geo-location, interests, needs) and of similar requests repeated over time, in order to inform approximation and refine knowledge of the underlying data, which in turn can be used to fulfill information needs more efficiently and effectively. Preliminary approaches in this direction have been proposed in [5].

Link to the group or personal webpage: dama.dibris.unige.it

References:

[1] Georgia Koutrika. Databases & People: Time to Move on From Baby Talk. EDBT/ICDT ‘18

[2] Yongjoo Park, Ahmad Shahab Tajik, Michael Cafarella, and Barzan Mozafari. Database Learning: Toward a Database that Becomes Smarter Every Time. ACM SIGMOD ‘17

[3] Peter Haas. Some Challenges in Approximate Query Processing. EDBT/ICDT ‘2018

[4] Barbara Catania, Giovanna Guerrini. Adaptively Approximate Techniques in Distributed Architectures. SOFSEM ‘15

[5] Barbara Catania, Francesco De Fino, Giovanna Guerrini. Recurring Retrieval Needs in Diverse and Dynamic Dataspaces: Issues and Reference Framework. EDBT/ICDT Workshops 2017


Title: Making motion analysis computationally efficient

Proposers: Nicoletta Noceti
Curriculum: Computer Science 

Short Description: Motion analysis is one of the main elements of Computer Vision, at the basis of a variety of higher-level tasks, such as activity recognition and behavior understanding. The first task of motion analysis is the identification of the moving regions in image streams, often referred to as motion-based image (and video) segmentation. Two main scenarios can be identified:

  • When the videos are acquired with a still camera, the task can be formulated as a pixel-based classification (moving or still?)
  • When there is no prior information on the setting of acquisition, the problem of motion-based image segmentation is intertwined with motion estimation.

In these fields, most of the work of recent decades has focused on accuracy of results; only a marginal share of the literature deals with performance, understood as the execution time of the algorithms, and efficiency, understood as the power consumption of the computing device running a specific algorithm. These two latter aspects, however, play a critical role in real-world visual applications, due to increasing frame resolution/throughput requirements and the increasing demand for stand-alone, battery-powered autonomous devices (e.g. robots).

We will therefore investigate the possibility of leveraging modern highly parallel computer architectures (e.g. many-core processors) and throughput-oriented devices (e.g. GPUs), along with power-constrained platforms (e.g. FPGAs or low-power computers), using state-of-the-art numerical methods and code optimization techniques, in order to cope with the growing computational demands of motion analysis. The goal of this project is to devise methods for motion-based image segmentation, with particular focus on background subtraction and optical flow estimation, specifically intended to be effective but also fast and energy-efficient.
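
As a minimal baseline illustrating the two operations mentioned above (not the methods to be developed), the sketch below runs OpenCV's stock background subtractor and dense optical flow over a video file; the file name is a placeholder.

# Baseline sketch: per-frame background subtraction (still camera) and
# dense optical flow (moving camera) with stock OpenCV routines.
import cv2

cap = cv2.VideoCapture("input.mp4")   # placeholder video path
bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

prev_gray = None
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Moving/still pixel classification (background subtraction).
    fg_mask = bg_subtractor.apply(frame)
    n_moving = cv2.countNonZero(fg_mask)   # rough count of "moving" pixels

    # Dense optical flow between consecutive frames (needed when the camera moves).
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev_gray is not None:
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
    prev_gray = gray

cap.release()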

References

  • Sundaram, N., Brox, T., & Keutzer, K. (2010). Dense point trajectories by GPU-accelerated large displacement optical flow. In European Conference on Computer Vision (pp. 438-451). Springer, Berlin, Heidelberg.
  • Stagliano, A., Noceti, N., Verri, A., & Odone, F. (2015). Online space-variant background modeling with sparse coding. IEEE Transactions on Image Processing, 24(8), 2415-2428.  

Title: Machine learning for modeling networked systems

Proposer: Annalisa Barla
Curriculum: Computer Science

Description:
We are recruiting one PhD candidate to work on machine learning methods that are suitable for:
- understanding large amounts of textual data
- inferring complex relational models
The methods should be designed to deal with large-scale temporal data (dynamical systems). These methods are gaining relevance in all those fields that aim at modeling the underlying semantic laws of complex phenomena.

In particular, we have in mind two possible scenarios: (1) data-driven web design, where the aim is to devise an optimal information architecture; (2) the analysis of structured biomedical and clinical data, where the aim is to identify relational patterns that are predictive of a certain pathological condition.

The ideal candidate should have strong mathematical and computational skills and be interested in working either on NLP applications (topic modeling, text representation) or on modeling theory (block modeling, pattern-based community detection).

Link to the group or personal webpage: ml.unige.it


 Title: Human-human and human-object interaction

Proposer: Francesca Odone, Nicoletta Noceti 

Curriculum: Computer Science

Research line: Data Science and Engineering

Topics: Computer Vision, Machine Learning, Deep Learning

Description: In many human-centered applications, it is important to study the interaction of a person with the surrounding environment or with other people. In this project we will consider smart environments as a reference application.

Building on ongoing research, where we are modeling the motion of a person by estimating and tracking his/her pose, as well as deriving complementary information (e.g. gaze direction), a first goal of this project is related to human-human interaction, with the aim of identifying and analyzing the interaction within small groups of people. This analysis may involve different semantic granularities: identifying social activities, detecting joint attention, or studying more specific cues of interaction or cooperation.

A second goal will be specifically devoted to human-object interaction. Here we will first address the problem of detecting contact points between a person and an object (a chair or a hand-held object for instance), initially considering RGBD streams but also exploring the possibility of restricting to monocular cameras; later we will model the type of interaction and draw possible connections with action recognition.

A challenge of both tasks is, to date, the limited availability of annotated datasets of appropriate size for state-of-the-art deep learning architectures. For this reason, we will explore unsupervised and weakly supervised methods, as well as domain adaptation, including data generation.


Title: Learning long-term dependencies from video streams

Proposer: Nicoletta Noceti, Francesca Odone

Curriculum: Computer Science

Research line: Data Science and Engineering

Topics: Computer Vision, Machine Learning, Deep Learning

Description: The problem of motion analysis from videos has become a key element in many application domains, ranging from Human-Machine Interaction to Assisted Living. Despite the significant advances of recent years, in which deep learning techniques have gained momentum as in other domains, the task remains among the most challenging, due to the extreme variability of dynamic information and of its appearance. In this sense, different motion concepts – gestures, actions, activities – have classically been addressed with different solutions, although it is rather intuitive that they may benefit from a more integrated approach enabling information sharing.

In the view of this project, a hierarchical relation can be considered between actions and activities, with the latter seen as sequences of the former. Thus, the problems of detecting and recognizing actions, and of modelling long-term dependencies between them, are key to enabling the understanding of more structured activities.

To this purpose we will consider in particular the following tasks:

  • Action detection on video streams. Although a large share of state-of-the-art approaches work on already trimmed videos, the ability to identify the portions of temporal data referring to a single action is paramount for fully automatic motion recognition systems. This part may be based on an appropriate understanding of the atomic parts characterizing actions, aka motion primitives, and of their evolution over time.
  • Activity recognition, possibly with anticipation capabilities. Most recognition methods refer to the short-term dependencies typical of actions, while activities are characterized by longer-term dependencies between actions. We will explore the use of Machine Learning, and in particular of LSTM networks, to appropriately model this property (a minimal sketch is given after this list).
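
As a minimal illustration of the last point, the hedged sketch below classifies an activity from a sequence of per-frame or per-action feature vectors with an LSTM; the feature dimension, hidden size, and class count are arbitrary assumptions, not the models to be developed in the project.

# Minimal sketch (assumed dimensions, illustrative only): an LSTM that maps a
# sequence of frame/action descriptors to an activity label.
import torch
import torch.nn as nn

class ActivityLSTM(nn.Module):
    def __init__(self, feat_dim=64, hidden_dim=128, n_activities=10):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, n_activities)

    def forward(self, x):                 # x: (batch, time, feat_dim)
        _, (h_n, _) = self.lstm(x)        # h_n: (1, batch, hidden_dim)
        return self.classifier(h_n[-1])   # logits: (batch, n_activities)

if __name__ == "__main__":
    model = ActivityLSTM()
    dummy = torch.randn(8, 200, 64)       # 8 sequences of 200 time steps
    print(model(dummy).shape)             # torch.Size([8, 10])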

Research line: Artificial Intelligence and Multiagent Systems 

Title: Automated Reasoning and/or Natural Language Processing for Evidence Analysis

Proposer: Viviana Mascardi

Curriculum: Computer Science

Description: Evidence analysis involves examining fragmented, incomplete knowledge and reasoning on it, in order to reconstruct plausible crime scenarios. The DigForASP COST Action, which involves the proposer in the role of Working Group Leader, aims at exploiting Automated Reasoning techniques and tools to analyse evidence in a way that is explainable. In many cases, evidence is either described in textual documents written in natural language, or hidden in semi-structured data involving short texts (for example, telephone records including transcripts of calls and SMS messages): its automated analysis cannot be carried out without a pre-processing phase based on Natural Language Processing.

The aim of this research is to exploit either Automated Reasoning or Natural Language Processing, or both if the candidate possesses both skills, to provide automated support for evidence analysis. The research will be carried out within DigForASP and will take advantage of a collaboration with the "Tribunale di Genova", with which the proposer is currently collaborating on dissemination activities related to AI and Law.

Link to the group or personal webpage: https://www.dibris.unige.it/mascardi-viviana

References:
DigForASP COST Action (CA17124, https://digforasp.uca.es, funded for four years starting from 09/2018 by the European Cooperation in Science and Technology)


Title: Hybrid deliberative/reactive robot architectures for task scheduling and joint task/motion planning 

Proposers: Marco Maratea, Fulvio Mastrogiovanni

Curriculum: Computer Science

Short description: A longstanding goal of Artificial Intelligence (AI) is to design robots able to autonomously and reliably move in the environment and manipulate objects to achieve their goals. Autonomy and reliability can still be considered unsolved issues when robots operate in everyday environments, especially in the presence of humans. Traditionally, besides specific activities in perception, knowledge representation, action, and the mechanical structure of robots, an important research trend concerns the architecture robots may adopt to enforce autonomy and reliability, and specifically robustness and resilience to unexpected events, as well as to uncertainty in perception and action outcomes.

This project aims at investigating, designing, and prototyping robot architectures able to (i) interleave scheduling (i.e., the long-term definition of what a robot should do in the future) and task planning (i.e., which specific actions a robot should perform next), and (ii) integrate task and motion planning (i.e., what robot trajectories correspond to planned actions), in a full perception-representation-reasoning-action loop.

On the one hand, the integration between scheduling and planning has not received sufficient attention in the literature, and only recently has the issue been studied, possibly relying on modeling through logical languages, e.g., PDDL or Answer Set Programming.

On the other hand, while discrete task planning has been considered mainly in the AI community, continuous motion planning has been the focus of much Robotics research. Such a separation leads to suboptimal robot behavior in real-world scenarios, especially in the case of unmodeled events, misperceptions, or uncertainty in sensory data. However, in the past few years, a number of approaches have been discussed in the literature which aim at integrating the discrete and the continuous planning processes. The recent introduction of planning formalisms such as PDDL+ is a decisive step in this direction, and its effective use in Robotics architectures has not been fully explored yet.

The Ph.D. student will be involved in ongoing research activities in the application of advanced AI techniques to Robotics. In particular, the following topics will be considered:

  • Definition of expressive and computationally efficient knowledge representation approaches for robots.
  • Definition of innovative and efficient scheduling algorithms suitable for a robotic setting.
  • Representation of robot perceptions using logic formalisms able to ground further reasoning processes, i.e., knowledge revision, update, fusion.
  • Design and implementation of joint task/action planning strategies for robots. 

Link to the group/personal page: http://www.star.dist.unige.it/~marco/, https://www.dibris.unige.it/mastrogiovanni-fulvio


 Title: Inductive and Deductive Reasoning in Transportation

Proposers: Davide Anguita, Marco Maratea

 Macro-areas: Artificial Intelligence, Data Analysis

Curriculum: Computer Science

Short description: Inference is defined as the act or process of deriving logical conclusions from premises known or assumed to be true. Deduction is an inference approach that does not imply any risk: once a rule is assumed to be true and the case is available, there is no source of uncertainty in deriving, through deduction, the result. Inductive reasoning, instead, implies a certain level of uncertainty, since we are inferring something that is not necessarily true but only probable. Inductive reasoning is the inference procedure that allows us to increase our level of knowledge, since induction infers something that cannot be logically deduced from the premises alone.

New-generation information systems collect and store large amounts of heterogeneous data, which make it possible to induce data-driven models able to forecast the evolution of the systems they describe. Deep, multi-task, transfer, and semi-supervised learning algorithms, together with rigorous statistical inference procedures, allow us to transform large and heterogeneous amounts of distributed and hard-to-interpret pieces of information into meaningful, easy-to-interpret, and actionable information. Data-driven models scale well with the amount of available data, but they are not as effective when exploited for deduction purposes. On the contrary, model-based reasoning allows complex systems to be modelled effectively, based on physical knowledge about them, and meaningful information to be deduced by solving complex (optimization) problems. The general idea is to encode an application problem as a logical specification; this specification is subsequently evaluated by a general-purpose solver, whose answers can be mapped back to solutions of the initial application problem. The limitation of model-based reasoning is that it may not scale well with the size of the problem.

The scope of this PhD proposal is to make inductive and deductive reasoning work together for the purpose of solving real-world problems in the transportation domain (e.g., railways, buses, and airways). Transport of goods and people is a multifaceted problem, since it involves technical constraints coming from limited physical assets, safety constraints, and social and cultural implications. In Europe, the increasing volume of people and freight transported is congesting the transportation systems. The challenge of this research theme is twofold. On one side, there is the need to exploit and further refine state-of-the-art tools and basic research themes in the inductive and deductive fields, in order to make induction and deduction work together and overcome their respective limitations. On the other side, there are plenty of real-world problems in public transportation (e.g., multimodal transportation systems, train dispatching, combinatorial problems, and forecast problems) that need the combination of different technological tools and techniques in order to obtain satisfying results.

Link to the group/personal page: www.smartlab.ws, http://www.star.dist.unige.it/~marco/


Research line: Secure and Reliable Systems

Title: Runtime verification and monitoring with RML

Proposer: Davide Ancona

Research activity: Secure and Reliable Systems (SRS) (https://www.dibris.unige.it/en/29-dibris/ricerca/343-secure-and-reliable-systems-en)

Curriculum: Computer Science

Description:
Runtime verification (RV) [1,2] is an approach to verification consisting in dynamically checking that the event traces generated by single runs of a system under scrutiny are compliant with the formal specification of its expected correct behaviour. RV is complementary to other verification methods and integrates well with both formal verification [3] and software testing [4]. RML (Runtime Monitoring Language) [5,6,7] is an expressive Domain Specific Language for RV which favours abstraction and simplicity, to better support reusability and portability of specifications and interoperability of the monitors generated from them.
The main aim of this research theme is to further study and advance RML; different separate directions could be considered, depending on the skills and interests of the prospective PhD student.

- Theory: there exist several interesting unresolved theoretical issues concerning the formal semantics of RML, its decidability and expressive power.

- Language design and implementation: several directions for extending RML can be considered; different, but not necessarily mutually exclusive, aspects
may include:
  * adding support for data aggregation, and constraint checking over data series;
  * making RML a real rule engine to allow generated monitors not only to passively check events, but also to react with specific actions triggered by event matching/unmatching;
  * integrating RML with ontology matching, to manage event types in a more flexible way;
  * providing better support for error reporting of monitors and for static checking of RML specifications, possibly integrated with an IDE, to favor language usability and specification development.

- Applications: assessing the usability of RML calls for more practical and challenging experiments with RML in interesting application domains, including the Internet of Things [8] and its related security issues.
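
For readers unfamiliar with RV, the sketch below is a deliberately minimal, library-free illustration of the general idea; it does not use RML syntax or RML-generated monitors. A monitor consumes an event trace one event at a time and checks a simple ordering property (resources must be acquired before use and released at most once); the event format and property are illustrative assumptions.

# Generic runtime-verification sketch (not RML): check, while events arrive,
# that every resource is acquired before it is used or released.
def make_monitor():
    held = set()

    def check(event):                 # event = (action, resource)
        action, res = event
        if action == "acquire":
            ok = res not in held
            held.add(res)
        elif action == "use":
            ok = res in held
        elif action == "release":
            ok = res in held
            held.discard(res)
        else:
            ok = False                # unknown event type
        return ok

    return check

if __name__ == "__main__":
    monitor = make_monitor()
    trace = [("acquire", "f1"), ("use", "f1"), ("release", "f1"), ("use", "f1")]
    for ev in trace:
        print(ev, "OK" if monitor(ev) else "VIOLATION")
    # The last event violates the property: "f1" is used after being released.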

References:
[1] Y. Falcone, K. Havelund, G. Reger, A Tutorial on Runtime Verification, in: Engineering Dependable Software Systems, 141–175, 2013.

[2] E. Bartocci, Y. Falcone, A. Francalanza, G. Reger, Introduction to Runtime Verification, in: Lectures on Runtime Verification - Introductory
and Advanced Topics, 1–33, 2018.

[3] W. Ahrendt, J. M. Chimento, G. J. Pace, G. Schneider, Verifying data- and control-oriented properties combining static and runtime verification: theory and tools, Formal Methods in System Design 51 (1) (2017) 200–265.

[4] M. Leotta, D. Clerissi, L. Franceschini, D. Olianas, D. Ancona, F. Ricca, M. Ribaudo,
Comparing Testing and Runtime Verification of IoT Systems: A Preliminary Evaluation based on a Case Study. ENASE 2019: 434-441

[5] https://rmlatdibris.github.io/

[6] L. Franceschini, RML: Runtime Monitoring Language, Ph.D. thesis,
DIBRIS - University of Genova, URL http://hdl.handle.net/11567/1001856, March 2020

[7] D. Ancona, L. Franceschini, A. Ferrando, V. Mascardi, A SWI-Prolog based implementation of RML,
to appear in Workshop on Trends, Extensions, Applications and Semantics of Logic Programming, ETAPS 2020

[8] D. Ancona, L. Franceschini, G. Delzanno, M. Leotta, M. Ribaudo, F. Ricca,
Towards Runtime Monitoring of Node.js and Its Application to the Internet of Things. ALP4IoT@iFM 2017: 27-42


 Title: Novel Testing Approaches for Modern Software Systems

Proposers: Maurizio Leotta, Filippo Ricca

Curriculum: Computer Science

Short Description:

Modern software applications have a significant impact on all aspects of our society, being crucial for a multitude of economic, social, and educational activities. Indeed, a considerable slice of modern software runs on web browsers and smartphones, and a good portion of the market is occupied by IoT applications.

As a consequence, the correctness and quality of such applications are of undeniable importance. The complexity of these kinds of systems, combined with ever shorter development cycles, demands novel approaches to testing. Several interesting research directions have lately emerged, ranging from the automated generation of test suites using, e.g., search-based strategies, to the use of machine learning (ML) and Artificial Intelligence (AI) to further increase the effectiveness of testing frameworks and tools.

The PhD candidate will select one research direction, among the many available and covered by the @DIBRIS Software Testing group, and will work on defining novel approaches and solutions to improve the state of the art.

Link to the personal webpages:

https://www.disi.unige.it/person/LeottaM/

https://www.disi.unige.it/person/RiccaF/ 

References

[1] M. Polo, P. Reales, M. Piattini, C. Ebert. Test Automation. IEEE Software, 30(1), pp.84-89, 2013.

[2] M. Leotta, D. Clerissi, F. Ricca, P. Tonella. Approaches and Tools for Automated End-to-End Web Testing. Advances in Computers, 101, pp.193-237, Elsevier, 2016. 


Title: Software-Engineering the Internet of Things

Proposer(s): Gianna Reggio
Curriculum: Computer Science, Secure and Reliable Systems

Short Description: Internet of Things (IoT) [3] based systems are very recent and pose new, difficult problems to developers, for which no software engineering support is available yet, as stated e.g. in [1] and [2]: "Confronted by the wildly diverse and unfamiliar systems of the IoT, many developers are finding themselves unprepared for the challenge. No consolidated set of software engineering best practices for the IoT has emerged. Too often, the landscape resembles the Wild West, with unprepared programmers putting together IoT systems in ad hoc fashion and throwing them out into the market, often poorly tested." [2]

The thesis aims initially at assessing the state-of-the-art of IoT based systems development, surveying companies and startups, and the scarce existing literature, to identify:
- the currently used development processes, methods, and software engineering techniques, e.g. testing (if any);
- the most widely used software tools, frameworks, standards and protocols;
- the perceived problems, and unsatisfied needs.

Then, the task of capturing and specifying the requirements for an IoT-based system will be considered, with particular emphasis on understanding which non-functional requirements are relevant. The preliminary proposal of [4], a method based on the UML and following the service-oriented paradigm for capturing and specifying the requirements of an IoT-based system, will be extended to cover non-functional requirements and validated through industrial case studies.

Finally, the work will tackle the task of designing and implementing an IoT system starting from the requirement specifications of the previous step, proposing specific methods. The new methods can also help to understand which protocols and technologies are the most appropriate to choose.

[1] D. Spinellis. 2017. Software-Engineering the Internet of Things. IEEE Software 34, 1 (2017), 4-6. http://ieeexplore.ieee.org/document/7819398/
[2] X. Larrucea, A. Combelles, J. Favaro, and K. Taneja. 2017. Software Engineering for the Internet of Things. IEEE Software 34, 1 (2017), 24-28. https://doi.org/10.1109/MS.2017.28
[3] IEEE Internet Initiative. 2015. Towards a definition of the Internet of Things (IoT). (2015). Available at iot.ieee.org/images/files/pdf/IEEE_IoT_Towards_Definition_Internet_of_Things_Revision1_27MAY15.pdf.
[4] Gianna Reggio. 2018. A UML-based Proposal for IoT System Requirements Specification. In MiSE '18: IEEE/ACM 10th International Workshop on Modelling in Software Engineering, May 27, 2018, Gothenburg, Sweden. ACM, New York, NY, USA, Article 4, 8 pages. https://doi.org/10.1145/3193954.3193956

Link to the group/personal webpage: http://sepl.dibris.unige.it/ 


Title: A Holistic Method for Business Process Analytics

Proposers: Gianna Reggio, Filippo Ricca
Curriculum: Computer Science, Secure and Reliable Systems

Short Description: In the last decade, the availability of massive storage systems, large amounts of data (big data), and the advances in several disciplines related to data science have provided powerful tools for potentially improving the business activities of organizations. Unfortunately, it is rather difficult to graft modern big data practices onto existing infrastructures and into company cultures that are ill-prepared to embrace big data; for example, [1] reports the following staggering figures about the success rate of big-data projects: "A year ago, Gartner estimated that 60 percent of big data projects fail. As bad as that sounds, the reality is actually worse. According to Gartner analyst Nick Heudecker this week, Gartner was 'too conservative' with its 60 percent estimate. The real failure rate? 'Closer to 85 percent.' In other words, abandon hope all ye who enter here, especially because 'the problem isn't technology,' Heudecker said. It's you."

Initially, we plan to investigate the reasons leading to the failure of big-data projects by surveying the scientific and grey literature, and also whether and how the few existing approaches to supporting big-data/analytics projects (e.g. CRISP-DM [6] and DataOps [7]) can overcome them.

Then, we will consider the more restricted field of "Business Process Analytics" (BPA), which refers to collecting and analysing business process-related data to answer process-centric questions (see, e.g., [3] and [2]).
Based on the initial investigations, the aim of the thesis is to develop a holistic method combining business process modelling and data-driven business process improvement to successfully leverage big data. The method will help to:
- connect the business processes and the stakeholders' goals with the available data;
- elicit the right questions for improving the business activities, and subsequently select the right analytic techniques for answering them;
- optimize data collection and storage with respect to the intended analyses.

Some initial ideas can be found in [4].

References:
[1] M. Asay. 85% of big data projects fail, but your developers can help yours succeed. TechRepublic, CBS Interactive.
November 10, 2017. www.techrepublic.com/article/85-of-big-data-projects-fail-but-your-developers-can-help-yours-succeed/
[2] S. Beheshti, B. Benatallah, S. Sakr, D. Grigori, H. Motahari-Nezhad, M. Barukh, A. Gater, and S. Ryu. Process Analytics: Concepts and Techniques for Querying and Analyzing Process Data. Springer, 2016.
[3] M. zur Mühlen and R. Shapiro. Business Process Analytics, pages 137-157. Springer, 2010.
[4] Reggio G., Leotta M., Ricca F., Astesiano E. Towards a Holistic Method for Business Process Analytics. In: Zhang L., Ren L., Kordon F. (eds) Challenges and Opportunity with Big Data. Monterey Workshop 2016. Lecture Notes in Computer Science, vol 10228. Springer. 2017.
[6] Cross-industry standard process for data mining (CRISP-DM). https://en.wikipedia.org/wiki/Cross-industry_standard_process_for_data_mining. Last seen March 2018.
[7] The DataOps Manifesto. http://dataopsmanifesto.org/. Last seen March 2018.

Link to the group or personal webpage: http://sepl.dibris.unige.it/index.php


Title: Types for asynchronous networks

Proposer(s): Elena Zucca, Paola Giannini (Univ. Piemonte Orientale)
Macro-area(s): Secure and reliable systems
Curriculum: Computer science

Short Description: *Global types* [1,2] are used to model the intended interaction structure among multiple participants in a network, with their *projection* on a single participant modeling its role in the interaction. Delegation, an essential feature of multiparty interaction, can be smoothly integrated in global types, as [3] shows. We will investigate global types for networks where interaction is possibly asynchronous, with the aim of expressing various safety and liveness properties by means of suitable well-formedness, projection, and subtyping definitions. Since the behaviour of networks is generally infinite, expressing global types and the related properties will heavily rely on *coinduction* and *flexible coinduction* [4]. We plan to implement the developed typechecking algorithms, and possibly to employ proof assistants supporting coinductive reasoning, such as Agda.
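
As a standard, simplified illustration of the notation (not tied to the specific calculus of [3]), a global type for three participants p, q, r and its projections may look as follows:

\[ G \;=\; p \to q : \langle \mathit{Int} \rangle .\; q \to r : \langle \mathit{Bool} \rangle .\; \mathbf{end} \]
\[ G\!\upharpoonright\! p = q\,!\langle \mathit{Int}\rangle.\mathbf{end} \qquad G\!\upharpoonright\! q = p\,?\langle \mathit{Int}\rangle.\, r\,!\langle \mathit{Bool}\rangle.\mathbf{end} \qquad G\!\upharpoonright\! r = q\,?\langle \mathit{Bool}\rangle.\mathbf{end} \]

Well-formedness, projection, and subtyping conditions on such types, adapted to asynchronous communication and to infinite (coinductively defined) behaviours, are the ingredients through which the intended safety and liveness properties can be expressed.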

[1] K. Honda, V.T. Vasconcelos, M. Kubo, Language primitives and type discipline for structured communication-based programming. ESOP 1998: 122-138
[2] K. Honda, N. Yoshida, M. Carbone, Multiparty Asynchronous Session Types. POPL 2008: 273-284
[3] I. Castellani, M. Dezani-Ciancaglini, P. Giannini, R. Horne: Global types with internal delegation. Theor. Comput. Sci. 807: 128-153 (2020)
[4] D. Ancona, F. Dagnino, E. Zucca, Generalizing Inference Systems by Coaxioms. ESOP 2017: 29-55.


Research line: Human-Computer Interaction 

 

Title: Interactive Sonification of Human Movement

Proposers: Antonio Camurri, Andrea Cera, Gualtiero Volpe 

Curriculum: Computer Science

Description.

Information visualization is a well-known field in computer science for conveying information, also in human-computer interfaces. Recent neuroscience research has shown the importance of sonification: our brains use all available sensory feedback, including sound, to keep track of the changing structure and position of the body in space and to adjust actions. The relation between sound and movement is supported by tight links between auditory and motor areas of the brain. For instance, listening to rhythms activates motor and premotor cortical areas, hence the use of rhythmic acoustic feedback to entrain movement. In addition, natural or artificial sounds such as tones and music have been shown to trigger emotional responses in listeners. Studies on the human brain have shown an unlearned preference for certain types of sound, such as harmonic and periodic sounds, of which music is a particular case. This growing body of work supports the use of sonification of movement and related processes as a powerful way to increase positive body awareness and facilitate engagement with movement. Sonification of body movement, as a means to inform, has been shown to improve motor control and possibly motor learning.

The proposed PhD research project is part of a broader project exploring how movement qualities can be recognized by means of the auditory channel: can we perceive an expressive full-body movement quality by means of its interactive sonification? The research will investigate cross-modal correspondences (Spence 2011) to design computational models and systems implementing sonification models (based on sound signal synthesis and processing) capable of "translating" movement qualities into the auditory channel in real time. The starting points for the research in this PhD are the papers by Singh et al (2016) and Niewiadomski et al (2019).
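
As a deliberately simple, offline illustration of parameter-mapping sonification (not the interactive, real-time models to be developed), the sketch below maps a time series of movement speed to the instantaneous frequency of a synthesized tone and writes it to a WAV file; the speed profile, frequency range, and mapping are arbitrary assumptions.

# Toy parameter-mapping sonification: movement speed modulates the pitch
# of a sine tone; faster movement -> higher pitch.
import numpy as np
from scipy.io import wavfile

SR = 44100                                                # audio sample rate (Hz)
speed = np.abs(np.sin(np.linspace(0, 3 * np.pi, 300)))    # fake per-frame speed profile

# Resample the per-frame speed to audio rate and map it to 220-880 Hz.
t = np.linspace(0, 1, SR * 3)                             # 3 seconds of audio (normalized time)
speed_audio = np.interp(t, np.linspace(0, 1, speed.size), speed)
freq = 220.0 + 660.0 * speed_audio                        # instantaneous frequency (Hz)

# Integrate the instantaneous frequency to obtain the phase, then synthesize.
phase = 2 * np.pi * np.cumsum(freq) / SR
tone = 0.5 * np.sin(phase)

wavfile.write("sonification.wav", SR, (tone * 32767).astype(np.int16))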

Research activities will benefit from results from the European funded project FET PROACTIVE EnTimeMent, and short residencies at the premises of one or more partners of EnTimeMent, including UCL University College London (Prof Nadia Berthouze). The research will include experiments and the development of prototype applications in one of the EnTimeMent scenarios, including serious games for therapy and rehabilitation (in collaboration with the Gaslini Children Hospital, and with UCL), sport, or artistic and education applications.

Disciplines: Human-Computer Interaction, Sound and music computing, Sonic interaction design, affective computing, Multimodal interfaces and systems.

Link to the group/personal webpage:

www.casapaganini.org

entimement.dibris.unige.it

ariel.unige.it  (Joint Augmented Rehabilitation Lab DIBRIS – Gaslini Children Hospital)

References

  1. A. Singh, S. Piana, D. Pollarolo, G. Volpe, G. Varni, A. Tajadura-Jiménez, A. CdeC Williams, A. Camurri, N. Bianchi-Berthouze (2016) Go-with-the-Flow: Tracking, Analysis and Sonification of Movement and Breathing to Build Confidence in Activity Despite Chronic Pain. Human–Computer Interaction, Taylor & Francis, 31(3-4), pp. 335-383.
  2. R. Niewiadomski, M. Mancini, A. Cera, S. Piana, C. Canepa, A. Camurri (2019) Does embodied training improve the recognition of mid-level expressive movement qualities sonification? Journal on Multimodal User Interfaces, 13(3), pp. 191-203.

Spence C (2011) Crossmodal correspondences: a tutorial review. Atten Percept Psychophys 73(4):971–995. https://doi.org/10.3758/s13414-010-0073-7

EU FET PROACTIVE EnTimeMent Project (entimement.dibris.unige.it)


Title: Affective Motion Capture

Proposers: Antonio Camurri, Giorgio Gnecco, Marcello Sanguineti, Gualtiero Volpe 

Curriculum: Computer Science

Description. The decreasing cost of whole-body sensing technology and its increasing reliability are leading to innovative techniques and technologies capable of recognizing people's affective states. A growing interest in, and understanding of, the role played by full-body expressions as a powerful affective communication channel is consolidated in both industry and research institutions (Kleinsmith and Berthouze 2013). Motion-capture technology is moving beyond the mere tracking of low-level features (such as position and speed) and the recognition of "what" movement is performed. Rather, the automated analysis of "how" a movement is performed at different temporal scales opens a wide range of applications, from therapy and rehabilitation to several cultural and creative industry applications. In this direction, a growing amount of investment is being made in novel motion-capture technologies capable of analyzing and predicting the expressive, affective, and social qualities of both individual and group behavior.

This proposal focuses on research in affective body expression perception, recognition, and prediction. It will benefit from the ongoing activities of the European-funded FET PROACTIVE EnTimeMent 4-year project (entimement.dibris.unige.it). EnTimeMent aims at the foundation and consolidation of radically new models and motion analysis technologies for automated prediction and analysis of human movement qualities, entrainment, and non-verbal full-body social emotions. The approach is grounded on novel neuroscientific, biomechanical, psychological, and computational evidence dynamically suited to the human time, towards time-adaptive technologies operating at multiple time scales in a multi-layered approach. The research will benefit also from the motion capture and multimodal technology infrastructure available at Casa Paganini-InfoMus of Dibris (www.casapaganini.org). Specific application testbeds to validate and evaluate research results will be identified in one of the EnTimeMent scenarios (cognitive-motor rehabilitation, performing arts, sport).

Research activities will include collaborations and short residencies at the premises of one or more partners of the EnTimeMent project, including Qualisys (motion capture industry), Euromov – University of Montpellier (Prof Benoit Bardy), UCL University College London (Prof Nadia Berthouze), IMT - School for Advanced Studies, Lucca (Prof Giorgio Gnecco), and the startup incubators GDI Hub (London) and Wylab (Chiavari).

Disciplines: Human-Computer Interaction, Affective computing, Motion capture, Multimodal interfaces and systems, Operations research.

Link to the group/personal webpage:

www.casapaganini.org

entimement.dibris.unige.it

References

Kleinsmith, A., Bianchi-Berthouze, N. (2013) "Affective Body Expression Perception and Recognition: A Survey," IEEE Transactions on Affective Computing, vol. 4, no. 1, pp. 15-33, Jan.-March 2013, doi: 10.1109/T-AFFC.2012.16

EU FET PROACTIVE EnTimeMent Project (entimement.dibris.unige.it)


Title: Automated analysis of expressive qualities of full-body human movement

Proposers: Antonio Camurri, Giorgio Gnecco, Marcello Sanguineti, Gualtiero Volpe

Curriculum: Computer Science

Description. The role played by full-body movements in conveying affective expressions and social signals is widely recognized by the scientific community [1], and a growing number of applications exploiting full-body expressive movement and non-verbal social signals are available. The possibility of automatically measuring movement qualities is very valuable in many different interactive applications, including therapy and rehabilitation in autism, and in cognitive and motor disabilities. In [2,3], the automated analysis of the origin of movement (i.e., where in the body the movement initiates), which is an important component in understanding and modelling expressive movement, was investigated.

In the thesis, the approach proposed in [2,3], based on a mathematical model called a cooperative game, will be developed in several directions. In general, mathematical games [4] study interactions among subjects by modelling conflict or cooperation between intelligent entities called players. In the analysis of full-body human movement, the game model is built over a suitably defined three-dimensional structure representing the human body. The players represent a subset of body joints. Each group of players has an associated utility, which represents their joint contribution to a common task. Using a utility constructed starting from a movement-related feature such as speed, a cooperative game index called the Shapley value [4] (recalled after the list below) can be exploited to analyse expressive qualities (e.g., to identify the movement origin, as done in [2,3] using the feature speed). Targets of the proposed thesis include, e.g.:

- Considering different formulations of the cooperative game and/or different cooperative indices.

- Extracting and embedding into the model other features and/or sets of features calculated from movements, such as position, acceleration, speed, jerks, and angular acceleration.

- Investigating the time series of the Shapley values to capture the dynamics of movement in finer detail (e.g., the importance of different timescales in recognizing a specific movement).

- Modelling biomechanical constraints, which determine the way we move as well as the way we perceive movements.

- Analysing the automatic detection of movement qualities different from the origin of movement.

- Conceiving novel experiments, in order to build up the movement repertoire and enlarge the available motion-capture data set.
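
For completeness, the Shapley value assigns to each player (body joint) i in a game with player set N and utility (characteristic function) v the standard quantity

\[ \phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr), \]

i.e., the average marginal contribution of joint i over all orders in which the joints can be added to a coalition; in [2,3] this quantity is computed on a graph-restricted game defined over the body structure.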

As an outcome of the thesis, a larger set of computational methods and software tools will be available for the automatic analysis of expressive qualities associated with full-body human movement.

This thesis will benefit from the ongoing activities of the European-funded FET PROACTIVE EnTimeMent 4-year project (entimement.dibris.unige.it). EnTimeMent aims at the foundation and consolidation of radically new models and motion analysis technologies for automated prediction and analysis of human movement qualities, entrainment, and non-verbal full-body social emotions. The approach is grounded on novel neuroscientific, biomechanical, psychological, and computational evidence dynamically suited to the human time, towards time-adaptive technologies operating at multiple time scales in a multi-layered approach. The research will benefit also from the motion capture and multimodal technology infrastructure available at Casa Paganini-InfoMus of Dibris (www.casapaganini.org). Specific application testbeds to validate and evaluate research results will be identified in one of the EnTimeMent scenarios (cognitive-motor rehabilitation, performing arts, sport). 

Research activities will include collaborations and short residencies at the premises of one or more partners of the EnTimeMent project, including Qualisys (motion capture industry), Euromov – University of Montpellier (Prof Benoit Bardy), UCL University College London (Prof Nadia Berthouze), IMT - School for Advanced Studies, Lucca (Prof. Giorgio Gnecco), and the startup incubators GDI Hub (London) and Wylab (Chiavari).

Disciplines: Human-Computer Interaction, Affective computing, Motion capture, Multimodal interfaces and systems, Operations research.

Link to the group/personal webpage:

www.casapaganini.org

entimement.dibris.unige.it

References

[1] A. Kleinsmith and N. Bianchi-Berthouze, "Affective body expression perception and recognition: A survey," IEEE Transactions on Affective Computing, vol. 4, no. 1, pp. 15-33, 2013.

[2] K. Kolykhalova, G. Gnecco, M. Sanguineti, A. Camurri, and G. Volpe, "Graph-restricted game approach for investigating human movement qualities," Proc. 4th Int. Conf. on Movement Computing (MOCO '17). London, UK: ACM, 2017, article no. 30, 4 pages.

[3] K. Kolykhalova, G. Gnecco, M. Sanguineti, A. Camurri, and G. Volpe, "Automated analysis of human movement qualities: An approach based on transferable-utility games on graphs," submitted.

[4] M. Maschler, E. Solan, and S. Zamir, Game Theory. Cambridge, UK: Cambridge University Press, 2013.


Title: Self avatar and embodiment for Human Computer Interaction in Mixed Reality

Proposers: Manuela Chessa, Fabio Solari
Curriculum: Computer Science

Short Description: Interaction in Virtual or Augmented Reality environments (Mixed Reality indicates the coexistence of both virtual and real elements) is obtained by means of video-based solutions (RGBD sensors) or tracking devices (controllers, sensorized gloves). Similarly, users' navigation in the virtual world is obtained by tracking the 6DOF position of the user's head. Though the hands and the head 6DOF positions are tracked and the VR environment is updated accordingly, most applications only display a partial representation of the user, such as the controllers or models of the hands. In the literature, there are many works trying to understand whether a self-avatar has a positive effect on interaction tasks, the sense of presence, and perceptual judgments [1]. Moreover, the graphic properties of the self-avatar could lead to the uncanny valley problem being observed [2]. Finally, the manipulation of the self-avatar can provide software support both for innovative entertainment applications and for the study of self-consciousness, with potential applications in neuro-rehabilitation and pain treatment, and can contribute to the understanding of neurological and psychiatric diseases.

The aim of this research theme is to develop novel and efficient solutions for creating and manipulating self-avatars inside virtual and augmented reality environments, addressing the alignment and the spatial co-localization of virtual and real elements, but also considering nonrealistic and "impossible" situations. Moreover, the realism and the graphics properties of the avatar should be considered by addressing computer graphics techniques. The effect of the presence of the self-avatar, also considering its realism and degree of complexity, will be examined with respect to the efficacy of interaction, embodiment, sense of presence, and acceptance of the developed systems.

[1] Pan Y, Steed A (2017) The impact of self-avatars on trust and collaboration in shared virtual environments. PLoS ONE 12(12): e0189078. https://doi.org/10.1371/journal.pone.0189078

[2] Schwind, V., Wolf, K., & Henze, N. (2018). Avoiding the uncanny valley in virtual character design. Interactions, 25(5), 45-49.

Link to the group/personal webpage: 

www.dibris.unige.it/en/chessa-manuela

www.dibris.unige.it/en/solari-fabio


 Title: Multimodal interactive systems based on non-physical dimensions of touch

Proposers: Antonio Camurri, Enrico Puppo, Davide Anguita, Gualtiero Volpe

Description: This PhD proposal aims at investigating computational models and developing techniques and systems for the automated measurement of tactility: how the non-verbal, social, affective content usually communicated and perceived by touch can be communicated and perceived without any physical contact. Can tactility be as effective as physical touch in socio-emotional communication? Scientific research (e.g., McKenzie et al., 2010) as well as artistic theories and practice (e.g., dance) demonstrate the existence of tactility: humans are able to perceive touch even in the absence of physical contact, since movement alone may induce in an observer the perception of touch. Touch conveys emotions and facilitates or enhances compliance in social interactions; it also reduces the negative effects of several chronic diseases. Illusory touch occurs when people believe they have been touched but no actual tactile stimulation has been applied. This proposal therefore focuses on computational models of tactility, that is, on studying and developing systems that enable the communication and perception of such content without physical contact. Tactility is the carrier of non-verbal emotional and social communication. Research challenges include the following: how does an observer perceive tactility and its role in socio-emotional interaction? Does an observer of tactility performed on a "ghost" body perceive the same socio-affective message as on a physical body?

Proposed work plan
- Assessment of the interdisciplinary existing state of the art: motion capture, biomechanics, crossmodal perception (Spence 2011), humanistic theories and computational models of non-verbal multimodal full-body movement analysis (Kleinsmith & Berthouze 2016), social signal processing (Vinciarelli et al 2012), analysis of 3D trajectories, machine learning. Software environments for the development of real-time multimodal systems (EyesWeb http://www.infomus.org/eyesweb_ita.php);
- Design of a dataset and of pilot experiments. The dataset will consist of a pool of movements performed by a number of pairs or small groups of participants who are highly skilled in movement execution (e.g., dancers) as well as poorly skilled. For example, two participants stand in front of each other at a few steps of distance; the first slowly walks to approach and touch the other (e.g. on a shoulder); then she returns to the original position, the second leaves the scene, and the first repeats the same action and touches the "ghost" of the second participant: she touches the memory, a sort of tactile photography.

The dataset will be recorded using the Qualisys (www.qualisys.com) motion capture system and other sensor systems (physiology, IMUs, audio) available at the DIBRIS premises of Casa Paganini-InfoMus;
- Analysis of tactility: extraction of a collection of multimodal features from the recorded data that explain the difference between the same touch gesture performed on a real human and on the “ghost” (a minimal sketch of this kind of feature extraction is given after this work plan);
- Assessment of the analysis outcomes by comparison with ratings of the same dataset provided by human participants;
- Development, evaluation and validation of prototypes of multimodal systems exploiting tactility.
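
To make the feature-extraction step more concrete, the following is a minimal sketch (in Python, assuming NumPy) of how simple kinematic descriptors, such as peak approach speed and maximum deceleration before contact, could be computed from a MoCap hand-marker trajectory. The trajectory, sampling rate, and feature set are illustrative placeholders, not those prescribed by the proposal.

import numpy as np

fs = 100.0                                    # sampling rate [Hz], assumed
T = 2.0                                       # duration of the reaching gesture [s]
t = np.arange(0.0, T, 1.0 / fs)
tau = t / T
# Synthetic minimum-jerk-like trajectory of a hand marker reaching a target 0.6 m away.
x = 0.6 * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
traj = np.stack([x, np.zeros_like(t), 1.2 * np.ones_like(t)], axis=1)   # positions [m]

vel = np.gradient(traj, 1.0 / fs, axis=0)     # 3D velocity [m/s]
speed = np.linalg.norm(vel, axis=1)
acc = np.gradient(speed, 1.0 / fs)            # tangential acceleration [m/s^2]

features = {
    "peak_speed": float(speed.max()),
    "time_to_peak_speed": float(t[speed.argmax()]),
    "max_deceleration": float(-acc.min()),    # strongest braking before "contact"
}
print(features)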

Expected results
- An archive of MoCap and multimodal data for the analysis of tactility, to be made publicly available to the research community;
- Development of novel algorithms, techniques, and software libraries for the automated analysis of tactility;
- Scientific publications in top-level international conferences and journals;
- Development of prototypes of systems exploiting tactility in at least one of the following scenarios: therapy and rehabilitation in specific activities of the ARIEL (Augmented Rehabilitation) Joint Laboratory between DIBRIS and the Gaslini Children's Hospital; enhanced active experience of cultural heritage in collaboration with the Palazzo Reale Museum in Genoa;
- Participation in public dissemination events: e.g., European Commission events, international workshops and conferences, summer schools, science festivals;
- The research may be part of international projects, including European-funded Horizon 2020 ICT projects, running at the Casa Paganini-InfoMus research centre.

Link to the group or personal Webpage:

www.casapaganini.org

www.youtube.com/InfoMusLab

dance.dibris.unige.it

Casa Paganini – InfoMus Research Centre publications:  http://www.infomus.org/publications_ita.php

References
- Camurri, A., & Volpe, G. (2016). The Intersection of art and technology. IEEE MultiMedia, 23(1), 10-17.
- Kleinsmith, A., Bianchi-Berthouze, N. (2013). Affective body expression perception and recognition: A survey. IEEE Transactions on Affective Computing, 4(1), 15-33.
- McKenzie, K. J., Poliakoff, E., Brown, R. J., and Lloyd, D. M. (2010). Now you feel it, now you don't: how robust is the phenomenon of illusory tactile experience? Perception, 39(6), 839-850.
- Spence, C. (2011). Crossmodal correspondences: A tutorial review. Attention, Perception, & Psychophysics, 73(4), 971-995.
- Vinciarelli, A., Pantic, M., Heylen, D., Pelachaud, C., Poggi, I., D'Errico, F., & Schroeder, M. (2012). Bridging the gap between social animal and unsocial machine: A survey of social signal processing. IEEE Transactions on Affective Computing, 3(1), 69-87.


Title: Techniques for the design and implementation of (Spatial) Augmented Reality Environments

Proposer: Manuela Chessa
Curriculum: Computer Science

Short description: Augmented Reality (AR) allows a real-time blending of digital information (e.g., text, virtual elements, images, sounds) onto the real world. Among the different technologies to design and implement AR scenarios, we can distinguish between wearable devices (e.g., the HoloLens) and non-wearable solutions, in particular Spatial Augmented Reality (SAR). The latter approach allows displaying additional information and virtual objects, or even changing the appearance of physical objects, directly in the real environment. It is worth noting that, compared to head-mounted AR displays or handheld devices, SAR has some advantages: e.g., it allows interaction with physical (yet augmented) objects, and it scales well to multiple users, therefore supporting collaborative tasks naturally. On the other hand, many issues are still open, e.g., robust detection of the 3D structure of the environment and registration between virtual and real content. Moreover, the combination with handheld or wearable devices can further widen the range of possible solutions and interaction systems.

The research theme aims to develop novel techniques to create AR and SAR environments in which people can interact in an ecological way. Besides virtual visual information added to the real world, sensorized objects providing controlled force and tactile feedback could also be used to augment reality and to devise novel interaction paradigms.
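
As an illustration of one basic building block for virtual-real registration in SAR (not the specific pipeline of this project), the sketch below, assuming OpenCV and NumPy, warps a piece of virtual content onto a planar physical surface through a homography; the corner correspondences and the projector resolution are hypothetical placeholders that, in practice, would come from projector-camera calibration or marker detection.

import cv2
import numpy as np

# A synthetic piece of virtual content (placeholder for the real augmentation).
virtual = np.full((400, 600, 3), 255, dtype=np.uint8)
cv2.putText(virtual, "AR label", (40, 200), cv2.FONT_HERSHEY_SIMPLEX, 3, (0, 0, 255), 5)

h, w = virtual.shape[:2]
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])                  # image corners
dst = np.float32([[120, 80], [980, 110], [950, 700], [150, 680]])   # surface corners (placeholder)

H, _ = cv2.findHomography(src, dst)                    # 3x3 planar mapping
frame = cv2.warpPerspective(virtual, H, (1280, 800))   # frame at projector resolution
cv2.imwrite("projector_frame.png", frame)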

Link to the group/personal webpage: 

www.dibris.unige.it/en/chessa-manuela

www.manuelachessa.it


Research line: Systems Engineering 

Title:  Sustainable planning and control of distributed power and energy systems

Proposers: R. Minciardi, M. Robba
Curriculum: Systems Engineering

Description: The increase in the use of renewable energies, the emergence of distributed generation and storage systems, and, in general, the concept of “smart grids” have given rise to the necessity of defining new decision and control schemes for planning and management purposes. Currently, a major challenge is represented by the lack of a unified mathematical framework including robust tools for modeling, simulation, control, and optimization of time-critical operations in complex multi-component and multi-scale networks, characterized by microgrids, interconnected buildings, renewables, storage systems, and electric vehicles. The difficulty of defining effective real-time optimal control schemes derives from the structure of a power grid and, specifically, from the presence of several issues: renewable and traditional power production, bidirectional power flows, dynamic storage systems, demand response requirements, and stochastic aspects (such as uncertainties in renewables, prices, and demand forecasting). This results in optimization problems that are generally intractable within a real-time optimal control scheme if all components of the whole system are represented at a full level of detail. Moreover, the new regulations related to new market entrants and schemes require a revision and improvement of the planning and management of distributed energy systems, as well as of their coordination, in order to optimize self-consumption and energy distribution.

The proposed PhD research activity will fall within this framework and has the objective of developing and applying tractable approaches for planning and optimal control, taking into account stochastic issues (i.e., intermittent renewables, demands, prices) and considering different possible architectures (multilevel, decentralized, distributed). In particular, the formulation of the optimization and control problems will be based on realistic models of the electrical grid and of its various sub-systems (microgrids, intelligent buildings, storage systems, renewables). Moreover, different energy distribution systems will be taken into account in relation to polygenerative systems: district heating, building heating and cooling with the associated storage systems, water, and gas distribution networks. Finally, different kinds of demands will be taken into account (heating, cooling, electricity), as well as electric vehicles with charging/discharging cycles within a smart grid.
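
As a purely illustrative example of the kind of tractable optimization problems mentioned above (not the models to be developed in the project), the following sketch formulates a tiny deterministic microgrid dispatch problem, with PV, a battery, and grid import over a few hours, as a linear program solved with SciPy; all parameter values are placeholders.

import numpy as np
from scipy.optimize import linprog

T = 4                                        # hours in the horizon
pv    = np.array([0.0, 3.0, 5.0, 1.0])       # PV generation forecast [kW]
load  = np.array([2.0, 4.0, 6.0, 3.0])       # demand forecast [kW]
price = np.array([0.10, 0.20, 0.30, 0.15])   # grid energy price [EUR/kWh]
eta, s_max, p_max, g_max, s_init = 0.9, 10.0, 4.0, 10.0, 2.0

# Variable layout: x = [g_0..g_T-1 | c_0..c_T-1 | d_0..d_T-1 | s_0..s_T-1]
n = 4 * T
c_obj = np.concatenate([price, np.zeros(3 * T)])   # pay only for grid import

A_eq = np.zeros((2 * T, n))
b_eq = np.zeros(2 * T)
for t in range(T):
    # Power balance: g_t - c_t + d_t = load_t - pv_t
    A_eq[t, t] = 1.0            # g_t
    A_eq[t, T + t] = -1.0       # c_t (battery charging)
    A_eq[t, 2 * T + t] = 1.0    # d_t (battery discharging)
    b_eq[t] = load[t] - pv[t]
    # Storage dynamics: s_t - s_{t-1} - eta*c_t + d_t/eta = 0, with s_{-1} = s_init
    A_eq[T + t, 3 * T + t] = 1.0
    if t > 0:
        A_eq[T + t, 3 * T + t - 1] = -1.0
    else:
        b_eq[T + t] = s_init
    A_eq[T + t, T + t] = -eta
    A_eq[T + t, 2 * T + t] = 1.0 / eta

bounds = ([(0, g_max)] * T + [(0, p_max)] * T
          + [(0, p_max)] * T + [(0, s_max)] * T)

res = linprog(c_obj, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("cost [EUR]:", round(res.fun, 3))
print("grid import [kW]:", np.round(res.x[:T], 2))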

References

F. Delfino, R. Minciardi,  F. Pampararo, M. Robba. A Multilevel Approach for the Optimal Control of Distributed Energy Resources and Storage, IEEE Transactions on Smart Grid, Special Issue on Control Theory and Technology in Smart Grid, to appear

S. Bracco, F. Delfino, F. Pampararo, M. Robba, M. Rossi. A mathematical model for the optimal operation of the University of Genoa Smart Polygeneration Microgrid: Evaluation of technical, economic and environmental performance indicators, Energy, 2013

H. Dagdougui, R. Minciardi, A. Ouammi, M. Robba, R. Sacile. A dynamic decision model for the real time control of hybrid renewable energy production systems, IEEE Systems Journal, Vol. 4, No. 3, 2010, p. 323-333.


Title: Optimal routing and charging of electric vehicles in a smart grid.
Proposers: Riccardo Minciardi, Massimo Paolucci, Michela Robba
Curriculum: Systems Engineering

Description: At the international level, different new policies have been developed to reduce CO2 emissions, such as the Kyoto Protocol, the European 20-20-20 strategy, and the Energy Roadmap 2050. The result is an increase of green technologies for energy production and transportation. Due to the presence of intermittent and distributed production (such as renewables (RES)) and loads (such as electric vehicles (EVs)), the current management of the electrical grid has to be changed, and new control strategies are necessary for the integration of the electrical and transportation networks. In fact, on one side, EVs need to be charged in the shortest possible time and, on the other side, smart grids should be able to accommodate such requests. In particular, from the users’ perspective it is necessary to know the electricity consumption over a specific path and to decide where and how much to charge EVs in order to satisfy their travel needs. From the grid perspective, instead, it is necessary to offer adequate charging facilities based on control strategies that are able to satisfy users while guaranteeing electrical grid constraints. The proposed PhD research activity will fall within this framework. In particular, the following main objectives/activities can be listed:

  • Definition and development of a discrete event optimization model for microgrids with EVs.
  • Definition and development of power management strategies for charging stations.
  • Optimal routing and charging of EVs: development of meta- and matheuristics.

Demonstration activities with real charging stations and case studies (in collaboration with companies) are foreseen during the three-year period.
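
As a simple illustration of the flavour of the heuristics mentioned above (and not of the discrete event models or matheuristics to be developed), the sketch below schedules EV charging over discrete time slots with an earliest-deadline-first rule under a station capacity limit; vehicles, slots, and parameters are hypothetical.

from dataclasses import dataclass

@dataclass
class EV:
    name: str
    energy_needed: float   # kWh still required
    deadline: int          # last usable time slot (exclusive)

SLOTS = 6                  # number of time slots in the horizon
SLOT_KWH = 10.0            # energy deliverable to one EV in one slot
STATION_CAP = 2            # EVs that can charge simultaneously

evs = [EV("A", 18.0, 4), EV("B", 25.0, 6), EV("C", 9.0, 3)]
schedule = {t: [] for t in range(SLOTS)}

for t in range(SLOTS):
    # Among vehicles still needing energy and not past their deadline,
    # serve the ones with the earliest deadline first.
    pending = sorted((ev for ev in evs if ev.energy_needed > 0 and t < ev.deadline),
                     key=lambda ev: ev.deadline)
    for ev in pending[:STATION_CAP]:
        delivered = min(SLOT_KWH, ev.energy_needed)
        ev.energy_needed -= delivered
        schedule[t].append((ev.name, delivered))

for t, assignments in schedule.items():
    print(f"slot {t}: {assignments}")
print("not fully charged:", [ev.name for ev in evs if ev.energy_needed > 0])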

Link to personal homepage

http://www.dibris.unige.it/minciardi-riccardo

http://www.dibris.unige.it/robba-michela

http://www.dibris.unige.it/paolucci-massimo

References:

Schneider, M., Stenger, A., Hof, J., 2014, An Adaptive VNS Algorithm for Vehicle Routing Problems with Intermediate Stops, in Technical Report LPIS-01/2014.

Yagcitekin, B., Uzunoglu, M., 2016, A double-layer smart charging strategy of electric vehicles taking routing and charge scheduling into account.


Title:  Smart scheduling approaches for manufacturing industry.

Proposer: Massimo Paolucci
Curriculum: Systems Engineering

Short description: Scheduling in manufacturing industry involves key decisions about how to best exploit the available resources (e.g., machines, tools, workers, energy) in order to efficiently perform the required production activities. Scheduling decisions are at the operational level, that is, they regard a short planning horizon (a day or a shift) and must take into account detailed production conditions and requirements. In real manufacturing industries scheduling problems are large-scale (the number of activities to be performed may be huge, and workshops may include many machines and tools), so the number of possible alternative decisions usually grows exponentially. In addition, even if scheduling problems share common features, several relevant differences exist which characterize different industrial sectors (e.g., food and beverage, fashion, automotive). Therefore, an effective general-purpose solution approach that could serve as the basis for developing scheduling systems for different sectors, avoiding restarting from scratch with a sector-specific algorithm, does not seem to be available. The introduction of the Industry 4.0 paradigm will make it possible to rely on fresh data from the field, thus improving the possibility of planning, adapting, and revising scheduling decisions more effectively, even reacting to the unpredicted changes that usually characterize real production systems. Finally, sustainability issues, such as energy consumption and carbon footprint, need to be included among the scheduling objectives.

The purpose of this research project is to design a new solution approach for addressing a large class of the scheduling problems emerging in manufacturing industry. Such an approach can be based on several building blocks and strategies (recent metaheuristics such as adaptive large neighborhood search or bio-inspired algorithms, simulation-optimization, as well as heuristics based on mathematical programming) that can be exploited to design a solver framework for this class of hard combinatorial problems.
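
To give a concrete, minimal picture of one of the mentioned building blocks, the sketch below implements a bare-bones adaptive large neighborhood search loop (random-removal destroy operators, greedy reinsertion repair, a simple acceptance rule, and a simplified weight update) on a toy single-machine total-tardiness instance; the instance, operators, and parameters are illustrative choices, not the framework the project aims to design.

import random

jobs = {  # job -> (processing time, due date)
    "J1": (4, 6), "J2": (3, 5), "J3": (7, 18), "J4": (2, 9), "J5": (5, 12),
}

def total_tardiness(seq):
    t, cost = 0, 0
    for j in seq:
        p, d = jobs[j]
        t += p
        cost += max(0, t - d)
    return cost

def destroy_random(seq, k):
    removed = random.sample(seq, k)
    return [j for j in seq if j not in removed], removed

def repair_greedy(partial, removed):
    # Reinsert each removed job at the position with the lowest resulting cost.
    for j in removed:
        best = min(range(len(partial) + 1),
                   key=lambda i: total_tardiness(partial[:i] + [j] + partial[i:]))
        partial = partial[:best] + [j] + partial[best:]
    return partial

destroy_ops = [lambda s: destroy_random(s, 1), lambda s: destroy_random(s, 2)]
weights = [1.0, 1.0]

random.seed(0)
current = list(jobs)
best, best_cost = current[:], total_tardiness(current)

for _ in range(200):
    # Roulette-wheel selection of a destroy operator based on adaptive weights.
    op_idx = random.choices(range(len(destroy_ops)), weights=weights)[0]
    partial, removed = destroy_ops[op_idx](current)
    candidate = repair_greedy(partial, removed)
    cand_cost = total_tardiness(candidate)
    if cand_cost <= total_tardiness(current):      # simple acceptance rule
        current = candidate
    if cand_cost < best_cost:
        best, best_cost = candidate, cand_cost
        weights[op_idx] += 0.5                     # reward the successful operator

print("best sequence:", best, "total tardiness:", best_cost)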

Link to the group or personal webpage:

http://www.dibris.unige.it/paolucci-massimo


Title: Strategic and tactical planning in production and logistics for manufacturing industry

Proposer(s): Massimo Paolucci
Curriculum: Systems Engineering

Short description: Planning at the strategic and tactical levels involves connected problems influencing both the design of the supply chain and the manufacturing production activities. Such decisions usually involve the activation of facilities, the allocation of the available resources, as well as the aggregate management of both production and distribution activities over a medium-to-long time horizon. Strategic and tactical planning decisions clearly impact not only business objectives but also environmental sustainability; for example, in closed-loop supply chains, planning decisions also include the use of materials and components recovered from returned products and, in general, the reduction of energy consumption.

Therefore, the proposed research project considers the problem of supply chain planning for manufacturing production systems, with the aim of defining a general-purpose decision support system able to operate at both the strategic and tactical levels. The purpose is to determine a unified model and a set of optimization approaches to support planning decisions at different levels (e.g., supply network design, inventory and lot-size planning, distribution planning), including sustainability aspects such as remanufacturing, energy consumption, and emissions. Since at least part of the considered optimization problems are computationally intractable, as they belong to the NP-hard complexity class, the algorithms to be designed and tested can range from exact approaches, based on mathematical programming models, to heuristics, metaheuristics (from neighborhood search techniques to population-based and bio-inspired algorithms), or matheuristics (i.e., methods that include mathematical programming models in a heuristic solution framework).
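
As a minimal illustration of one of the planning sub-problems mentioned above (supply network design), the sketch below applies a greedy "add" heuristic to a toy uncapacitated facility location instance; the costs and the heuristic itself are illustrative placeholders, not the unified model or the exact/matheuristic approaches the project targets.

fixed_cost = {"F1": 100.0, "F2": 120.0, "F3": 90.0}          # facility opening costs
serve_cost = {                                                # facility -> customer service cost
    "F1": {"C1": 20, "C2": 40, "C3": 50, "C4": 30},
    "F2": {"C1": 45, "C2": 15, "C3": 25, "C4": 55},
    "F3": {"C1": 35, "C2": 50, "C3": 20, "C4": 25},
}
customers = ["C1", "C2", "C3", "C4"]

def total_cost(open_set):
    if not open_set:
        return float("inf")
    assign = sum(min(serve_cost[f][c] for f in open_set) for c in customers)
    return assign + sum(fixed_cost[f] for f in open_set)

open_facilities = set()
while True:
    # Try adding the facility that yields the largest cost reduction.
    best_f, best_c = None, total_cost(open_facilities)
    for f in fixed_cost:
        if f not in open_facilities:
            c = total_cost(open_facilities | {f})
            if c < best_c:
                best_f, best_c = f, c
    if best_f is None:
        break
    open_facilities.add(best_f)

print("open:", sorted(open_facilities), "cost:", total_cost(open_facilities))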

Link to the group or personal webpage:

http://www.dibris.unige.it/paolucci-massimo


Title:  Hyper- and meta- heuristics for multi-objective optimization

Proposer: Massimo Paolucci
Curriculum: Systems Engineering

Short description: Most decision problems in real-life applications require taking into account more than one objective/criterion. Usually such objectives are non-commensurable and conflicting. These problems arise in many different fields and often express the conflict between customer satisfaction, stakeholders' profit, and social and environmental sustainability. For example, in manufacturing industry and logistics, planning the activities of the supply chain should aim at timely meeting the customer demand, reducing production, inventory, and transportation costs, minimizing energy consumption and CO2 emissions, favoring material recycling, and so on. Multi-criteria decision making deals with this wide class of decision problems and includes multi-objective optimization, i.e., the methods whose purpose is to define the set of solutions that deserve to be considered by decision makers. Such solutions are the so-called efficient or Pareto optimal ones. In general, the problem of determining the Pareto optimal solutions of a multi-objective optimization problem is NP-hard, and the size of this set of solutions can be exponential. For this reason, metaheuristic algorithms, such as Genetic Algorithms, Simulated Annealing, Ant Colony Optimization, and Particle Swarm Optimization, have been applied to multi-objective optimization since the 1990s. The purpose of this research is to investigate in depth the possible use of metaheuristics for multi-objective optimization, trying in particular to design general-purpose self-adapting algorithms. This can be pursued by experimenting with so-called hyper-heuristics, which combine higher-level metaheuristics with lower-level ones, where the purpose of the former is to identify the best configuration of the latter when solving a given optimization problem.
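
For concreteness, the sketch below shows the basic notions the research builds on, namely Pareto dominance and the extraction of the non-dominated (efficient) solutions from a finite set of candidates, assuming all objectives are to be minimized; the objective vectors are hypothetical.

def dominates(a, b):
    """True if solution a is at least as good as b on every objective
    and strictly better on at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Example: (cost, CO2 emissions, tardiness) of candidate production plans.
candidates = [(10, 5, 3), (8, 7, 3), (12, 4, 2), (11, 6, 4)]
print(pareto_front(candidates))   # (11, 6, 4) is dominated by (10, 5, 3)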


Title: Transportation Network Optimization via Transferable-Utility Games

Proposer: Marcello Sanguineti
Curriculum: Systems Engineering

Short description: Network connectivity is an important aspect of any transportation network, as the role of the network is to provide society with the ability to easily travel from point to point using various modes. Analyzing network connectivity can assist decision makers in identifying weak components, detecting and preventing failures, and improving connectivity in terms of reduced travel time, reduced costs, increased reliability, easier access, etc.

A basic question in network analysis is: how “important” is each node? An important node might, e.g., contribute substantially to short connections between many pairs of nodes, handle a large amount of the traffic, generate relevant information, represent a bridge between two areas, etc. To quantify the relative importance of nodes, one possible approach consists in using the concept of “centrality” [1, Chapter 10]. A limitation of classical centrality measures is the fact that they evaluate nodes based on their individual contributions to the functioning of the network. For instance, the importance of a stop in a transportation network can be computed as the difference between the full network capacity and the capacity when the stop is closed. However, such an approach is inadequate when, for instance, multiple stops can be closed simultaneously. As a consequence, one needs to refine the existing centrality measures so as to take into account that the network nodes do not act merely as individual entities, but as members of groups of nodes. To this end, one can exploit game theory [2], which, in general terms, provides a basis to develop a systematic study of the relationship between rules, actions, choices, and outcomes in situations that can be either competitive or non-competitive.

The idea at the root of game-theoretic centrality measures [3] is the following: the nodes are considered as players in a cooperative game, where the value of each coalition of nodes is determined by certain graph-theoretic properties. The key advantage of this approach is that nodes are ranked not only according to their individual roles in the network, but also taking into account how they contribute to the roles of all possible groups of nodes. This is important in various applications in which a group's performance cannot be simply described as the sum of the individual performances of the group members involved. In the case of transportation networks, suppose we have a certain budget at our disposal. One possible approach consists in asking whether investing all the money in increasing the capacity and/or service of a single transportation component (road section, bridge, transit route, bus stop, etc.) substantially improves the whole network. A better way of proceeding for the network analyst/designer would probably consist in considering the simultaneous improvement of a (possibly small) subset of the components. In this case, to evaluate the importance of a component one has to take into account the potential gain of improving that component as part of a group of components, not merely the potential gain of improving the component alone. This approach can be formalized in terms of cooperative game theory [2], where the nodes are players whose performances are studied in coalitions, i.e., subsets of players.

This research project, which takes its cue from the works [4,5], consists in developing methods and tools from a particular type of cooperative games, called “cooperative games with transferable utility” (for brevity, “TU games”), to optimize transportation networks. Given a transportation network, a TU game will be defined that takes into account the network topology, the weights associated with the arcs, and the demand based on the origin-destination matrix (weights associated with the nodes). The nodes of the network represent the players of the TU game.

We aim at exploiting game-theoretic solution concepts developed during decades of research to identify the nodes that play a major role in the network. In particular, we shall use the solution concept known as the Shapley value [2], a criterion according to which each node is attributed a value, in such a way that the larger the value, the larger the node's importance. The Shapley value enjoys mathematical properties well suited to the proposed analysis. Computational aspects related to the evaluation of the Shapley value will also be investigated [6], studying the possibility of polynomial-time computation with respect to the network dimension.
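
As a minimal illustration of the solution concept (not of the TU game that will actually be defined on topology, arc weights, and origin-destination demand), the sketch below computes exact Shapley values on a toy five-node network, taking as characteristic function the number of node pairs connected within the subgraph induced by a coalition. Note that the exact computation enumerates all orderings of the players, which is why the polynomial-time computability mentioned above matters.

from itertools import permutations
from collections import deque

edges = {("A", "B"), ("B", "C"), ("C", "D"), ("B", "D"), ("D", "E")}
nodes = ["A", "B", "C", "D", "E"]

def value(coalition):
    """v(S): number of node pairs of S connected within the subgraph induced by S."""
    S = set(coalition)
    adj = {u: set() for u in S}
    for a, b in edges:
        if a in S and b in S:
            adj[a].add(b)
            adj[b].add(a)
    connected_pairs = 0
    for s in S:                      # BFS from each node; each pair counted twice
        seen, queue = {s}, deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        connected_pairs += len(seen) - 1
    return connected_pairs // 2

def shapley_values():
    phi = {u: 0.0 for u in nodes}
    orders = list(permutations(nodes))
    for order in orders:
        coalition = []
        for u in order:
            before = value(coalition)
            coalition.append(u)
            phi[u] += value(coalition) - before   # marginal contribution of u
    return {u: phi[u] / len(orders) for u in nodes}

print(shapley_values())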

Depending on whether the analysis focuses on the “physical nodes” or on the “physical links”, the definition of the player changes. This research project considers both. When the transportation nodes (representing, e.g., intersections, transit terminals, bus stops, major points of interest, etc.) are analyzed, the network on which the TU game is defined is identical to the physical network. On the other hand, when arcs (e.g., road segments, transit routes, rail lines, etc.) are analyzed, the network is transformed in such a way that the physical links are modeled as nodes.

Link to the group/personal webpage:

http://www.dist.unige.it/msanguineti/

References 

[1] S. Wasserman and K. Faust, Social Network Analysis: Methods and Applications. Vol. 8. Cambridge University Press, 1994.

[2] J. González-Díaz, I. García-Jurado, and M.G. Fiestras-Janeiro, An Introductory Course on Mathematical Game Theory. AMS, 2010.

[3] T.P. Michalak, Game-Theoretic Network Centrality - New Centrality Measures Based on Cooperative Game Theory, 2016. Available from: http://game-theoretic-centrality.com/index.html.

[4] Y. Hadas and M. Sanguineti, An Approach to Transportation Network Analysis Via Transferable-Utility Games. 96th Annual Meeting of the Transportation Research Board, Transportation Research Board of the National Academies, Washington, DC, 8-12 January 2017.

[5] Y. Hadas, G. Gnecco, M. Sanguineti, An Approach to Transportation Network Analysis Via Transferable Utility


 
