March 25, 2022 VIRTUAL
The PeRConAI workshop aims to foster the development and circulation of new ideas and research directions on pervasive and resource-constrained machine learning. It brings together practitioners and researchers working at the intersection of pervasive computing and machine learning, stimulating cross-fertilization between the two communities.
suitable solutions for advancing towards a truly pervasive and liquid AI
PeRConAI will focus on solutions for advancing towards a truly pervasive and liquid AI, enabling edge devices, regardless of their available resources, to accomplish both training and inference under full, weak, or no supervision.
The increasing pervasiveness of edge devices and the high availability, velocity, and volatility of data generated and collected at the edge of the internet are driving a paradigm shift in the design of AI-based systems. Both training and inference tasks are moving from powerful, remote data centers, where all data is available in a centralized fashion, to more pervasive and distributed/decentralized systems at the edge of the internet, operating in proximity to where data is physically generated and/or collected.
Unlike the cloud context, where AI systems can rely on known and controllable high-performance compute infrastructures, the design of edge AI systems must leverage the collaboration of several heterogeneous devices working in a highly dynamic context, both in terms of data and connectivity. Specifically, devices at the edge are very often resource-constrained: they may have limited compute capabilities, memory, or battery availability, to mention a few. Connectivity, although widely available, may be intermittent due to external factors (e.g., wireless coverage shortages) or internal ones (e.g., the energy-saving policies of battery-powered edge devices). Beyond resource limitations, the data locally collected or generated by devices may differ statistically from one device to another, even when collected by the same application or belonging to the same phenomenon. Finally, human intervention in the AI process is still predominant, especially in its initial phases (e.g., data preparation, labelling, and pre-processing), limiting the speed-up necessary to make AI truly pervasive.
we envision a future where every device at the edge of the internet will have an active role in the AI process
In the long term, we envision a future where every device at the edge of the internet, regardless of its computing capabilities, will play an active role in the AI process by processing its own data and collaborating with other devices to extract knowledge from it. Despite some advancements in the development of solutions suitable for training and running AI at the edge, the final goal of a pervasive and liquid AI capable of leveraging any type of device and data at its disposal is still a long way off. To fill this gap and move towards the realization of such a pervasive AI vision, several challenges need to be addressed, which can be summarised by the following open questions:
- How to design, train, and optimize advanced machine learning (ML/DL) models in pervasive contexts where edge devices have limited resources (e.g., computational power, storage, and energy)?
- How to implement and optimize distributed ML/DL systems on small devices that can collaboratively exploit local data while preserving privacy?
- How to move from heavily supervised edge ML/DL systems to weakly supervised or unsupervised systems by also leveraging unlabeled data?
- How to enable pervasive ML/DL systems running on resource-limited devices to adapt to possibly evolving contexts?