# Introducing interLink: Extend Digital Twins Across Remote Resources

In the ever-evolving landscape of data science and container orchestration, efficiency and simplicity are key. The release of interLink, together with [its web page and documentation](https://intertwin-eu.github.io/interLink/), marks the first attempt in interTwin to manage Kubernetes pods across remote resources, aiming to show that this no longer needs to be a complicated task. interLink not only simplifies running any pod on any remote infrastructure, but also improves the management experience, offering features tailored to streamline the integration of HPC resources into the cloud. Let's look at what makes interLink a promising tool for the DTE field, and for scientific computing in general.

## Minimal impact on the end-user experience

The foundation of interLink is its commitment to a seamless end-user experience. Users can continue using their preferred data science frameworks without disruption, just as if they were interacting with a regular Kubernetes cluster. This continuity keeps existing workflows intact and minimizes the learning curve typically associated with new tools. A minimal sketch of what pod submission looks like from the user's side is shown at the end of this post.

## Built with extensions in mind

We want resource providers to stay in control of how container execution is delivered on their infrastructure. At the same time, grappling with complex APIs and kubelet internals is not something we want to maintain in multiple places, nor something site admins should have to take on. interLink keeps the contract simple: resource providers implement a straightforward REST interface for managing container lifecycles, while all Kubernetes internals are handled by the core implementation, maintained as a community effort. In this way, interLink enables resource providers (e.g. HPC centres) to offer Kubernetes-like access to their resources much faster, without needing a black belt in cluster internals. A sketch of what such a provider-side interface could look like also appears at the end of this post.

## Conclusions

All in all, with interLink, managing DTEs on distributed resources can be as simple as creating Kubernetes pods. Users can focus on what truly matters: driving innovation and achieving their goals while making efficient use of computing resources, wherever they are. The first pilots with real DT use cases and resource providers show encouraging results, which were recently presented at one of the most relevant cloud conferences, KubeCon + CloudNativeCon Europe 2024 in Paris: you can find the [details](https://colocatedeventseu2024.sched.com/event/1YFfQ/pods-everywhere-interlink-a-virtual-kubelet-abstraction-streamlining-hpc-resource-exploitation-diego-ciangottini-infn) and the [video recording](https://youtu.be/M3uLQiekqo8?si=AwgMQ13ZTjJYeDrl).
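To make the end-user experience above more concrete, here is a minimal sketch of submitting a pod from a standard Kubernetes client so that it lands on an interLink virtual node. It uses the official Kubernetes Python client; the node name and toleration key are placeholders for illustration, not values mandated by interLink (check your own deployment's virtual node name and taints).

```python
# Minimal sketch: submit a pod that targets an interLink virtual node.
# Node name and toleration key below are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="demo-job"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="demo",
                image="busybox:1.36",
                command=["sh", "-c", "echo hello from a remote resource"],
            )
        ],
        # Steer the pod onto the virtual node (hypothetical node name).
        node_selector={"kubernetes.io/hostname": "interlink-virtual-node"},
        # Tolerate the taint that keeps ordinary workloads off the virtual node
        # (the key is an assumption; use the one configured in your cluster).
        tolerations=[
            client.V1Toleration(
                key="virtual-node.interlink/no-schedule",
                operator="Exists",
            )
        ],
        restart_policy="Never",
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

From the user's point of view this is an ordinary pod submission; the fact that the container actually runs on a remote resource is transparent.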
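On the provider side, the idea of "a straightforward REST interface for managing container lifecycles" can be pictured as a small service that translates lifecycle calls into whatever the local infrastructure (for example an HPC batch system) understands. The sketch below is purely illustrative: the endpoint names, payloads, and in-memory bookkeeping are simplified assumptions, not the official interLink plugin specification, which is defined in the project documentation.

```python
# Illustrative sketch of a resource-provider plugin: a tiny REST service that
# maps create/status/delete calls onto the local execution backend.
from flask import Flask, jsonify, request

app = Flask(__name__)
jobs = {}  # pod UID -> local job state (kept in memory for the sketch)

@app.post("/create")
def create():
    pod = request.get_json()   # pod description forwarded by the core service
    uid = pod["metadata"]["uid"]
    jobs[uid] = "submitted"    # here a real plugin would submit e.g. a batch job
    return jsonify({"uid": uid}), 201

@app.get("/status")
def status():
    uid = request.args["uid"]
    return jsonify({"uid": uid, "state": jobs.get(uid, "unknown")})

@app.post("/delete")
def delete():
    pod = request.get_json()
    jobs.pop(pod["metadata"]["uid"], None)  # cancel the corresponding local job
    return jsonify({"ok": True})

if __name__ == "__main__":
    app.run(port=4000)
```

Everything Kubernetes-specific stays in the community-maintained core; the provider only has to implement this thin translation layer against its own infrastructure.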