Renting
Mesh AI Rental Service Overview
Mesh AI offers clients tailored access to a range of computational resources, including GPUs, CPUs, RAM, and disk storage. These resources are provisioned through Docker containers, giving each client a dedicated, isolated instance. This guide provides an overview of our approach to resource allocation and management.
Resource Allocation on Mesh AI
Mesh AI ensures dedicated and efficient allocation of computational resources for each instance. Here’s how we manage various resources:
· GPU: Each instance is assigned exclusive access to specific GPUs, preventing performance degradation caused by shared usage among clients.
· CPU: CPU cores are allocated in proportion to the number of GPUs assigned to an instance, with additional burst capacity available when overall demand on the host is low.
· RAM: RAM is allocated in line with the instance's CPU share; temporary use beyond that share is permitted when the host has spare capacity.
· Disk: Disk storage is fixed at instance creation and cannot be resized afterwards, so estimate your storage requirements carefully before launch.
· Miscellaneous Resources: Ancillary resources, such as shared memory, are sized alongside the GPU allocation to keep the overall distribution balanced.
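As a rough mental model, the per-instance allocation described above resembles the resource limits you could declare yourself in a Docker Compose file. This is an illustrative sketch only; the image name and all limit values are placeholders, not Mesh AI defaults:

```yaml
services:
  workload:
    image: nvidia/cuda:12.4.0-runtime-ubuntu22.04   # placeholder image
    shm_size: 16g                 # shared memory scaled with the GPU share
    deploy:
      resources:
        limits:
          cpus: "8"               # CPU share proportional to GPU count
          memory: 32g             # RAM aligned with the CPU share
        reservations:
          devices:
            - driver: nvidia
              count: 1            # exclusive access to one GPU
              capabilities: [gpu]
```

In practice Mesh AI applies these limits for you when the instance is created; the fragment is only meant to show which knobs correspond to which bullet above.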
Duration and Lifecycle
· Rental Agreements: Each instance’s operational lifespan is outlined in the rental agreement, with automatic termination at the end of the contract. Extensions may be available but are subject to market conditions and are not guaranteed.
Operating Environment
· Linux Docker Instances: Mesh AI supports a wide range of Docker images, including those from private repositories, provided correct credentials are used.
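For private repositories, the usual Docker workflow applies: authenticate against the registry, then reference the image when creating the instance. The registry URL and image name below are placeholders, not Mesh AI endpoints:

```shell
# Log in to the private registry (prompts for credentials).
docker login registry.example.com

# Verify the image is reachable with those credentials.
docker pull registry.example.com/team/custom-image:latest
```

On Mesh AI itself you supply the same credentials through the instance configuration rather than running these commands by hand; the snippet simply shows what the platform does on your behalf.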
Launch Modes
· Entrypoint/Args: Runs the image's entrypoint with user-supplied arguments; suited to unattended, scripted workloads.
· SSH: Secure remote access for management and interaction.
· Jupyter: Interactive computing sessions for data analysis and development.
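The three launch modes map onto familiar workflows. The commands below are illustrative; hostnames, ports, and key paths are placeholders that depend on your instance:

```shell
# Entrypoint/Args mode: the container simply runs its configured
# entrypoint, e.g. the equivalent of:
#   python train.py --epochs 10
# and exits when the command finishes.

# SSH mode: connect with the key registered at instance creation.
ssh -i ~/.ssh/mesh_ai_key -p 2222 root@instance.example.com

# Jupyter mode: the instance exposes a notebook server; open the
# tokenized URL shown on the instance page in your browser.
```

Which mode fits best depends on the workload: Entrypoint/Args for fire-and-forget jobs, SSH for hands-on administration, Jupyter for interactive exploration.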
Mesh AI’s platform is designed to deliver a flexible, efficient computing service, accommodating a variety of computational needs, from demanding AI/ML workloads to smaller, cost-sensitive projects.