Work Packages

Work Package 1: On-line Training from Synthetic Data

Inria Lead: Bruno Raffin – DFKI Lead: Tim Dahmen

Under the premise that, in an increasing number of situations, training
data will be generated from parametric models and simulations, a central question is: which data
points should be generated? Both partners have developed complementary approaches to answer this
question: DFKI characterizes the parameter space of the models using metrics based on
image similarity, while Inria relies on statistical impact analysis. In the project, we will swap the
approaches, i.e. DFKI will apply the statistical impact analysis and Inria will apply the image-based
adaptive sampling approach, each to their respective problems.
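
As a rough illustration of image-based adaptive sampling of a parameter space, the sketch below greedily refines the region of a one-dimensional parameter range where neighbouring model outputs differ the most. It is a hypothetical sketch, not project code: simulate and image_distance merely stand in for a parametric model and an image-similarity metric.

    import heapq
    import numpy as np

    def simulate(theta):
        # Hypothetical stand-in for a parametric simulation producing an image.
        x = np.linspace(0.0, 1.0, 64)
        return np.outer(np.ones(64), np.sin(10.0 * theta * x))

    def image_distance(a, b):
        # Hypothetical stand-in for an image-similarity metric
        # (here: mean absolute pixel difference).
        return float(np.mean(np.abs(a - b)))

    def adaptive_sample(lo, hi, budget):
        # Greedily subdivide the parameter interval whose endpoint images
        # differ the most, until the sampling budget is exhausted.
        samples = {lo: simulate(lo), hi: simulate(hi)}
        heap = [(-image_distance(samples[lo], samples[hi]), lo, hi)]
        while len(samples) < budget and heap:
            _, a, b = heapq.heappop(heap)
            mid = 0.5 * (a + b)
            samples[mid] = simulate(mid)
            for left, right in ((a, mid), (mid, b)):
                d = image_distance(samples[left], samples[right])
                heapq.heappush(heap, (-d, left, right))
        return samples  # parameter value -> generated training image

    training_images = adaptive_sample(0.0, 1.0, budget=32)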


Work Package 2: Reproducible Deployment and Scheduling Strategies for AI Workloads

Inria Lead: Alexandru Costan – DFKI Lead: René Schubotz

Inria has developed a methodology for reproducible experiments with
generic workflows on hybrid edge/cloud infrastructures, supporting automated execution of the
experimental cycle; the methodology is implemented in the E2Clab software. DFKI has specialized
expertise in GPU virtualization, container-orchestration systems, microservice and
serverless architectures, DNN workflows, and DNN model deployment. The work planned in WP2
will leverage both, in a unified way, as an extension of the E2Clab methodology and framework.
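
The automated experimental cycle can be pictured as a deploy / execute / collect / archive loop driven by a declarative description of the edge and cloud layers. The sketch below is purely illustrative and does not reproduce E2Clab's actual API; all names (EXPERIMENT, deploy, run_workflow, archive) are hypothetical.

    import json
    import pathlib
    import time

    # Hypothetical declarative description of a hybrid edge/cloud experiment.
    EXPERIMENT = {
        "layers": {
            "edge":  {"nodes": 4, "image": "dnn-inference:latest"},
            "cloud": {"nodes": 2, "image": "dnn-training:latest"},
        },
        "variations": [{"batch_size": 32}, {"batch_size": 64}],
    }

    def deploy(layer, spec):
        # Placeholder: a real driver would call a container orchestrator here
        # to provision spec["nodes"] instances of spec["image"] on this layer.
        print(f"deploying {spec['nodes']} x {spec['image']} on {layer}")

    def run_workflow(params):
        # Placeholder for launching the DNN workflow and collecting metrics.
        start = time.time()
        return {"params": params, "duration_s": time.time() - start}

    def archive(result, out_dir):
        # Store results together with their parameters so runs can be replayed.
        out_dir.mkdir(parents=True, exist_ok=True)
        tag = "_".join(f"{k}-{v}" for k, v in result["params"].items())
        (out_dir / f"run_{tag}.json").write_text(json.dumps(result, indent=2))

    for layer, spec in EXPERIMENT["layers"].items():
        deploy(layer, spec)
    for variation in EXPERIMENT["variations"]:
        archive(run_workflow(variation), pathlib.Path("results"))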


Work Package 3: Memory Management and Computation Orchestration for Deep Network Training

Inria Lead: Olivier Beaumont – DFKI Lead: Richard Membarth

The Inria and DFKI teams involved in WP3 already work on
memory management issues. Methodologically, the two teams take different approaches:
DFKI relies on exact but costly formulations for general graphs, while Inria uses
faster methods based on dynamic programming that are limited to certain classes of graphs.
Our goal is to combine the two approaches and to study how they can be coupled with other
techniques such as offloading or tensor decompositions. From the point of view of computation
orchestration, the approaches are highly complementary, with DFKI bringing expertise in compiler-
optimized code generation for the kernels and Inria bringing expertise in dynamic scheduling.
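
To make the dynamic-programming side concrete, the sketch below implements the classical checkpointing recurrence for a linear chain, under the simplifying assumption of uniform per-layer cost. It illustrates the general technique only and is not one of the WP3 algorithms.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def recompute_cost(length, slots):
        # Minimal number of extra forward evaluations needed to run the backward
        # pass over a chain of `length` layers when at most `slots` intermediate
        # states can be kept in memory (uniform per-layer cost assumed).
        if length <= 1:
            return 0
        if slots == 1:
            # Only the chain input is kept: replay from the start before each
            # backward step, i.e. (length-1) + (length-2) + ... + 1 steps.
            return length * (length - 1) // 2
        # Store a checkpoint after j forward steps, reverse the tail of
        # length - j layers with one slot fewer, then reverse the head
        # while reusing all slots.
        return min(
            j + recompute_cost(length - j, slots - 1) + recompute_cost(j, slots)
            for j in range(1, length)
        )

    # Recomputation overhead for a 16-layer chain with 3 checkpoint slots.
    print(recompute_cost(16, 3))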

