The PeRConAI workshop aims to foster the development and circulation of new ideas and research directions on pervasive and resource-constrained machine learning. It brings together practitioners and researchers working at the intersection of pervasive computing and machine learning, stimulating cross-fertilization between the two communities.
All workshop papers require a full registration.
Motivation
suitable solutions for advancing towards a truly pervasive and liquid AI
PeRConAI aims to advance truly pervasive AI, empowering edge devices, regardless of their resource constraints, to perform both training and inference under full, weak, or no supervision.
As the number and capability of edge devices grow, so do the volume, velocity, and variability of data generated at the edge of the internet. This growth is catalyzing a shift from centralized AI systems, traditionally reliant on remote data centers, toward decentralized, edge-centric paradigms. These new systems must process and learn from data near its source, at the edge, where resources are often constrained, connectivity is intermittent, and data is inherently heterogeneous.
Unlike the predictable environments of cloud computing, edge contexts present challenges such as limited compute power, memory, and battery life, as well as unreliable or policy-limited network availability. Moreover, data collected across edge devices often varies significantly even when it relates to the same phenomenon, complicating model generalization. Human intervention remains a bottleneck, especially in early stages of the AI lifecycle such as labeling and pre-processing, which hampers scalability.
Vision
we envision a future where every device at the edge of the internet will have an active role in the AI process
PeRConAI envisions a future where every edge device can actively participate in the AI pipeline, not only by processing local data but also by collaborating with other devices to derive and share knowledge. Although progress has been made toward enabling training and inference at the edge, achieving a pervasive AI that is flexible, adaptive, and ubiquitous remains a substantial challenge.
To bridge this gap, several critical research questions must be addressed:
- How to design, train, and optimize advanced machine learning (ML/DL) models in pervasive contexts where edge devices have limited resources (i.e., computational power, storage, and energy)?
- How to implement and optimize distributed ML/DL systems on small devices that can collaboratively exploit local data while preserving privacy?
- How to move from heavily supervised edge ML/DL systems to weakly supervised or unsupervised systems by also leveraging unlabeled data?
- How to enable pervasive ML/DL systems running on limited devices to adapt to possibly evolving contexts?
