
      Use Cases for Linode Dedicated CPU Instances


      Updated by Linode

      Written by Ryan Syracuse

      Why Dedicated CPU

      Dedicated CPU Linodes are designed for CPU-intensive tasks and can significantly reduce issues that arise in shared cloud hosting environments. Normally, when creating a Linode under our standard plan, you are paying for access to virtualized CPU cores, which are allocated to you from a host’s shared physical CPU. While a standard plan is designed to maximize performance, the reality of a shared virtualized environment is that your processes are scheduled to use the same physical CPU cores as other customers. This can produce a level of competition that results in CPU steal, or a higher wait time from the underlying hypervisor to the physical CPU.

      CPU steal can be defined more precisely as the gap between the CPU cycles your virtualized environment expects and the cycles it actually receives while waiting for scheduled access to the physical CPU. Although this number is generally small enough that it does not heavily impact standard workloads and use cases, if you are expecting high and constant consumption of CPU resources, you are at risk of being negatively impacted by CPU steal.
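      You can check steal time for yourself on any Linux guest. The snippet below is a minimal sketch that reads the steal counter, the eighth numeric field of the aggregate cpu line in /proc/stat, which counts jiffies the virtual machine wanted to run but spent waiting on the hypervisor:

```shell
# Print the "steal" counter from /proc/stat: time (in jiffies) that this
# virtual machine was ready to run but waited on the hypervisor.
# Fields on the "cpu" line: user nice system idle iowait irq softirq steal ...
awk '/^cpu /{print "steal jiffies:", $9}' /proc/stat
```

      Tools like top and vmstat report the same counter as a percentage, in the st column.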

      Dedicated CPU Linodes have private access to entire physical CPU cores, meaning no other Linodes will have any processes on the same cores you’re using. Dedicated CPUs are therefore exempt from any competition for CPU resources and the potential problems that could arise because of CPU steal. Depending on your workload, you can experience an improvement in performance by using Dedicated CPU.

      Dedicated CPU Use Cases

      While a standard plan is usually a good fit for most use cases, a Dedicated CPU Linode may be recommended for a number of workloads related to high and constant CPU processing. Such examples include:

      CI/CD Toolchains and Build Servers

      CI and CD are abbreviations for Continuous Integration and Continuous Delivery, respectively, and refer to an active approach to DevOps that reduces overall workloads by automatically testing and regularly implementing small changes. This can help to prevent last-minute conflicts and bugs, and keeps tasks on schedule. For more information on the specifics of CI and CD, see our Introduction to CI/CD Guide.

      In many cases, the CI/CD pipeline can become resource-intensive if many new code changes are built and tested against your build server. When a Linode is used as a remote server and is expected to be regularly active, a Dedicated CPU Linode can add an additional layer of speed and reliability to your toolchain.
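      At its core, each run on a build server is just a fetch-build-test cycle that fails fast on any error. The sketch below simulates that cycle with a throwaway local project, since the repository URL and build commands are specific to your own toolchain:

```shell
#!/bin/sh
# Minimal CI build-step sketch: check out fresh sources, build, test,
# and abort on the first failure (set -e).
set -e
workdir=$(mktemp -d)
cd "$workdir"
# Stand-in for cloning a real repository; we fake a project with a
# trivial build script and test script.
printf 'echo build ok\n' > build.sh
printf 'echo tests ok\n' > test.sh
sh build.sh
sh test.sh
```

      A real pipeline replaces the fake project with a git clone and your project’s actual build and test commands, but the shape of the run is the same.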

      Game Servers

      Depending on the intensity of demands they place on your Linode, game servers may benefit from a Dedicated CPU. Modern multiplayer games need to coordinate with a high number of clients and require syncing entire game worlds for each player. If CPU resources are not available, players will experience issues like stuttering and lag. Games that maintain large persistent worlds or high player counts are especially likely to benefit from a Dedicated CPU.

      Audio and Video Transcoding

      Audio and video transcoding (also known as video/audio encoding) is the process of taking a video or audio file from its original or source format and converting it to another format for use with a different device or tool. Because this is often a time-consuming and resource-intensive task, a Dedicated CPU or Dedicated GPU Linode is suggested to maximize performance. FFmpeg is a popular open source tool used specifically for the manipulation of audio and video, and is recommended for a wide variety of encoding tasks.
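      As a sketch of a typical FFmpeg transcode (file names and settings are examples, not prescriptions): when ffmpeg is available, the snippet first synthesizes a one-second test clip so the commands are runnable end to end; with real media, replace source.mp4 with your own file.

```shell
# Transcode to H.264 video; -crf 23 trades output size against quality.
# Skips politely on machines where ffmpeg is not installed.
if command -v ffmpeg >/dev/null 2>&1; then
  # Synthesize a short test clip so this sketch is self-contained.
  ffmpeg -y -loglevel error -f lavfi -i testsrc=duration=1:size=320x240:rate=30 source.mp4
  ffmpeg -y -loglevel error -i source.mp4 -c:v libx264 -crf 23 -an output.mp4
else
  echo "ffmpeg not installed"
fi
```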

      Big Data and Data Analysis

      Big Data and Data Analysis is the process of analyzing and extracting meaningful insights from datasets so large they often require specialized software and hardware. Big data is most easily recognized by the “three Vs” of big data:

      • Volume: Generally, if you are working with terabytes, petabytes, exabytes, or more of information, you are in the realm of big data.
      • Velocity: With Big Data, you are using data that is being created, called, moved, and interacted with at a high velocity. One example is the real time data generated on social media platforms by their users.
      • Variety: Variety refers to the many different types of data formats with which you may need to interact. Photos, video, audio, and documents can all be written and saved in a number of different formats. It is important to consider the variety of data that you will collect in order to appropriately categorize it.

      Processing big data is often especially hardware-dependent. A Dedicated CPU can give you access to the isolated resources often required to complete these tasks.

      The following tools can be extremely useful when working with big data:

      • Hadoop – an Apache project for the creation of parallel processing applications on large data sets, distributed across networked nodes.

      • Apache Spark – a unified analytics engine for large-scale data processing designed with speed and ease of use in mind.

      • Apache Storm – a distributed computation system that processes streaming data in real time.
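      To make the idea concrete before reaching for Hadoop or Spark: the canonical big data example, word count, can be sketched with standard Unix tools. Hadoop’s classic word count job is essentially this same pipeline, distributed across many networked nodes:

```shell
# Count word frequencies in a small sample file, most frequent first.
printf 'big data big insights\n' > sample.txt
tr ' ' '\n' < sample.txt | sort | uniq -c | sort -rn
```

      The pipeline splits input into words, groups identical words, and counts each group, which is exactly the map, shuffle, and reduce structure that big data frameworks scale out across a cluster.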

      Scientific Computing

      Scientific Computing is a term used to describe the process of using computing power to solve complex scientific problems that are impossible, dangerous, or otherwise inconvenient to solve via traditional means. Often considered the “Third Pillar” of modern science alongside Theoretical Analysis and Experimentation, Scientific Computing has quickly become a prevalent tool in scientific spaces.

      Scientific Computing involves many intersecting skills and tools for a wide array of more specific use cases, though solving complex mathematical formulas that depend on significant computing power is considered standard. A large number of general purpose open source tools are available to help you get started with Scientific Computing.

      It’s worth keeping in mind that, beyond general use cases, there are many more examples of tools and software available, often designed for individual fields of science.

      Machine Learning

      Machine learning is a powerful approach to data science that uses large sets of data to build prediction algorithms. These prediction algorithms are commonly used in “recommendation” features on many popular music and video applications, online shops, and search engines. When you receive intelligent recommendations tailored to your own tastes, machine learning is often responsible. Other areas where you might find machine learning being used are in self-driving cars, process automation, security, marketing analytics, and health care.

      Below is a list of common tools used for machine learning and AI that can be installed on a Linode CPU instance:

      • TensorFlow – a free, open-source machine learning framework and deep learning library. TensorFlow was originally developed by Google for internal use and later fully released to the public under the Apache License.

      • PyTorch – a machine learning library for Python that uses the popular GPU-optimized Torch framework.

      • Apache Mahout – a scalable library of machine learning algorithms and distributed linear algebra framework designed to let mathematicians, statisticians, and data scientists quickly implement their own algorithms.

      Where to Go From Here

      If you’re ready to get started with a Dedicated CPU Linode, our Getting Started With Dedicated CPU guide will walk you through the process of an initial installation. Additionally, see our Pricing Page for a rundown of both hourly and monthly costs.


      This guide is published under a CC BY-ND 4.0 license.




      Getting Started with Linode GPU Instances


      Updated by Linode

      Written by Linode

      This guide will help you get your Linode GPU Instance up and running on a number of popular distributions. To prepare your Linode, you will need to install NVIDIA’s proprietary drivers using NVIDIA’s CUDA Toolkit.

      When using distributions that are not fully supported by CUDA, like Debian 9, you can install the NVIDIA driver without the CUDA toolkit. To only install the NVIDIA driver, complete the Before You Begin section and then move on to the Manual Install section of this guide.

      For details on the CUDA Toolkit’s full feature set, see the official documentation.


      Why do NVIDIA’s drivers need to be installed?

      Linode has chosen not to bundle NVIDIA’s proprietary closed-source drivers with its standard Linux distribution images. While some operating systems are packaged with the open source Nouveau driver, the NVIDIA proprietary driver will provide optimal performance for your GPU-accelerated applications.

      Before You Begin

      1. Follow our Getting Started and Securing Your Server guides for instructions on setting up your Linodes.

      2. Make sure that your GPU is currently available on your deployed Linode:

        lspci -vnn | grep NVIDIA
        

        You should see a similar output confirming that your Linode is currently running an NVIDIA GPU. The example output was generated on Ubuntu 18.04. Your output may vary depending on your distribution.

          
        00:03.0 VGA compatible controller [0300]: NVIDIA Corporation TU102GL [Quadro RTX 6000/8000] [10de:1e30] (rev a1) (prog-if 00 [VGA controller])
            Subsystem: NVIDIA Corporation Quadro RTX 6000 [10de:12ba]
        
        

        Note

        Depending on your distribution, you may need to install lspci manually first. On current CentOS and Fedora systems, you can install this utility with the following command:

        sudo yum install pciutils
        
      3. Move on to the next section to install the dependencies that NVIDIA’s drivers rely on.

      Install NVIDIA Driver Dependencies

      Prior to installing the driver, you should install the required dependencies. Listed below are commands for installing these packages on many popular distributions.

      1. Find your Linode’s distribution from the list below and install the NVIDIA driver’s dependencies:

        Ubuntu 18.04

        sudo apt-get install build-essential
        

        Debian 9

        sudo apt-get install build-essential
        sudo apt-get install linux-headers-`uname -r`
        

        CentOS 7

        sudo yum install kernel-devel-$(uname -r) kernel-headers-$(uname -r)
        sudo yum install wget
        sudo yum -y install gcc
        

        OpenSUSE

        zypper install gcc
        zypper install kernel-source
        
      2. After installing the dependencies, reboot your Linode from the Cloud Manager. Rebooting will ensure that any newly installed kernel headers are available for use.
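      After the reboot, you can confirm that headers matching the running kernel are actually present. Paths vary by distribution; this is a sketch for Debian- and RHEL-style layouts, where headers live under /usr/src:

```shell
# The driver build needs headers for the *running* kernel, so compare
# the running release against what is installed under /usr/src.
uname -r
ls /usr/src 2>/dev/null | grep -F "$(uname -r)" \
  || echo "no headers found for $(uname -r) under /usr/src"
```

      A mismatch between the running kernel and the installed headers is a common cause of driver build failures, and a reboot after installing headers resolves it.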

      NVIDIA Driver Installation

      After installing the required dependencies for your Linux distribution, you are ready to install the NVIDIA driver. If you are using Ubuntu 18.04, CentOS 7, or OpenSUSE, proceed to the Install with CUDA section. If you are using Debian 9, proceed to the Install Manually section.

      Install with CUDA

      In this section, you will install your GPU driver using NVIDIA’s CUDA Toolkit.
      For a full list of native Linux distribution support in CUDA, see the CUDA toolkit documentation.

      1. Visit the CUDA Downloads Page and navigate to the Select Target Platform section.

      2. Provide information about your target platform by following the prompts and selecting the appropriate options. Once complete, you will gain access to the correct download link for the CUDA Toolkit installer. Use the table below for guidance on how to respond to each prompt:

        Prompt              Selection
        ------------------  ---------------------------
        Operating System    Linux
        Architecture        x86_64
        Distribution        Your Linode’s distribution
        Version             Your distribution’s version
        Installer type      runfile (local)

        A completed set of selections will resemble the example:

        CUDA Downloads Page - Select Target Platform

      3. A Download Installer section will appear below the Select Target Platform section. The green Download button in this section will link to the installer file. Copy this link to your computer’s clipboard:

        Copy Download Link

      4. On your Linode, enter the wget command and paste in the download link you copied. This example shows the syntax for the command, but you should make sure to use the download link appropriate for your Linode:

        wget https://developer.nvidia.com/compute/cuda/10.1/Prod/local_installers/cuda_10.1.168_418.67_linux.run
        
      5. After wget completes, run your version of the installer script to begin the installation process:

        sudo sh cuda_*_linux.run
        

        Note

        The installer will take a few moments to run before generating any output.

      6. Read and accept the License Agreement.

      7. Choose whether to install the CUDA Toolkit in its entirety or only some of its components. To use your GPU, you only need to install the driver. Optionally, you can install the full toolkit to gain access to a set of tools for creating GPU-accelerated applications.

        To only install the driver, uncheck all options directly below the Driver option. This will result in your screen resembling the following:

        Cuda Installer

      8. Once you have checked your desired options, select Install to begin the installation. A full install will take several minutes to complete.

        Note

        Installation on CentOS and Fedora will fail following this step, because the installer requires a reboot to fully remove the default Nouveau driver. If you are running either of these operating systems, reboot the Linode, run the installer again, and your installation will be successful.

      9. When the installation has completed, run the nvidia-smi command to make sure that you’re currently using your NVIDIA GPU device with its associated driver:

        nvidia-smi
        

        You should see a similar output:

        +-----------------------------------------------------------------------------+
        | NVIDIA-SMI 418.67       Driver Version: 418.67       CUDA Version: 10.1     |
        |-------------------------------+----------------------+----------------------+
        | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
        | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
        |===============================+======================+======================|
        |   0  Quadro RTX 6000     Off  | 00000000:00:03.0 Off |                  Off |
        | 34%   57C    P0    72W / 260W |      0MiB / 24190MiB |      0%      Default |
        +-------------------------------+----------------------+----------------------+
        
        +-----------------------------------------------------------------------------+
        | Processes:                                                       GPU Memory |
        |  GPU       PID   Type   Process name                             Usage      |
        |=============================================================================|
        |  No running processes found                                                 |
        +-----------------------------------------------------------------------------+
        

        In the output, you can see that the driver is installed and functioning correctly, the version of CUDA attributed to it, and other useful statistics.
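      For scripted health checks, nvidia-smi can also emit individual fields in a machine-readable form via its standard query flags. The snippet below degrades gracefully on machines where the driver is not yet installed:

```shell
# Print only the driver version, suitable for monitoring scripts.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=driver_version --format=csv,noheader
else
  echo "nvidia-smi not found (driver not installed yet)"
fi
```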

      Install Manually

      This section will walk you through the process of downloading and installing the latest NVIDIA driver on Debian 9. This process can also be completed on another distribution of your choice, if needed:

      1. Visit NVIDIA’s Driver Downloads Page.

      2. Make sure that the options from the drop-down menus reflect the following values:

        Prompt              Selection
        ------------------  ---------------------------
        Product Type        Quadro
        Product Series      Quadro RTX Series
        Product             Quadro RTX 8000
        Operating System    Linux 64-bit
        Download Type       Linux Long Lived Driver
        Language            English (US)

        The form will look as follows when completed:

        NVIDIA Drivers Download Form

      3. Click the Search button, and a page will appear that shows information about the driver. Click the green Download button on this page. The file will not download to your computer; instead, you will be taken to another download confirmation page.

      4. Copy the link for the driver installer script from the green Download button on this page:

        Copy Download Link

      5. On your Linode, enter the wget command and paste in the download link you copied. This example shows the syntax for the command, but you should make sure to use the download link you copied from NVIDIA’s site:

        wget http://us.download.nvidia.com/XFree86/Linux-x86_64/430.26/NVIDIA-Linux-x86_64-430.26.run
        
      6. After wget completes, run your version of the installer script on your Linode. Follow the prompts as necessary:

        sudo bash NVIDIA-Linux-x86_64-*.run
        
      7. Select OK and Yes for all prompts as they appear.

      8. Once the installer has completed, use nvidia-smi to make sure that you’re currently using your NVIDIA GPU with its associated driver:

        nvidia-smi
        

        You should see a similar output:

        +-----------------------------------------------------------------------------+
        | NVIDIA-SMI 430.26       Driver Version: 430.26       CUDA Version: 10.2     |
        |-------------------------------+----------------------+----------------------+
        | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
        | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
        |===============================+======================+======================|
        |   0  Quadro RTX 6000     Off  | 00000000:00:03.0 Off |                  Off |
        | 34%   59C    P0     1W / 260W |      0MiB / 24220MiB |      6%      Default |
        +-------------------------------+----------------------+----------------------+
        
        +-----------------------------------------------------------------------------+
        | Processes:                                                       GPU Memory |
        |  GPU       PID   Type   Process name                             Usage      |
        |=============================================================================|
        |  No running processes found                                                 |
        +-----------------------------------------------------------------------------+
        





      Use Cases for Linode GPU Instances


      Updated by Linode

      Written by Linode

      What are GPUs?

      GPUs (Graphical Processing Units) are specialized hardware originally created to manipulate computer graphics and image processing. GPUs are designed to process large blocks of data in parallel, making them excellent for compute-intensive tasks that require thousands of simultaneous threads. Because a GPU has significantly more logical cores than a standard CPU, it can perform computations that process large amounts of data in parallel more efficiently. This means GPUs accelerate the large calculations that are required by big data, video encoding, AI, and machine learning.

      The Linode GPU Instance

      Linode GPU Instances include NVIDIA Quadro RTX 6000 GPU cards with Tensor, ray tracing (RT), and CUDA cores. You can read more about the NVIDIA Quadro RTX 6000 on NVIDIA’s site.

      Use Cases

      Machine Learning and AI

      Machine learning is a powerful approach to data science that uses large sets of data to build prediction algorithms. These prediction algorithms are commonly used in “recommendation” features on many popular music and video applications, online shops, and search engines. When you receive intelligent recommendations tailored to your own tastes, machine learning is often responsible. Other areas where you might find machine learning being used are self-driving cars, process automation, security, marketing analytics, and health care.

      AI (Artificial Intelligence) is a broad concept that describes technology designed to behave intelligently and mimic the cognitive functions of humans, like learning, decision making, and speech recognition. AI uses large sets of data to learn and adapt in order to achieve a specific goal. GPUs provide the processing power needed for common AI and machine learning tasks like input data preprocessing and model building.

      Below is a list of common tools used for machine learning and AI that can be installed on a Linode GPU instance:

      • TensorFlow – a free, open-source machine learning framework and deep learning library. TensorFlow was originally developed by Google for internal use and later fully released to the public under the Apache License.

      • PyTorch – a machine learning library for Python that uses the popular GPU-optimized Torch framework.

      • Apache Mahout – a scalable library of machine learning algorithms and a distributed linear algebra framework designed to let mathematicians, statisticians, and data scientists quickly implement their own algorithms.

      Big Data

      Big data is a discipline that analyzes and extracts meaningful insights from large and complex data sets. These sets are so large and complex that they require specialized software and hardware to appropriately capture, manage, and process the data. When thinking of big data and whether or not the term applies to you, it often helps to visualize the “three Vs”:

      • Volume: Generally, if you are working with terabytes, petabytes, exabytes, or more of information, you are in the realm of big data.

      • Velocity: With Big Data, you’re using data that is being created, called, moved, and interacted with at a high velocity. One example is the real time data generated on social media platforms by their users.

      • Variety: Variety refers to the many different types of data formats with which you may need to interact. Photos, video, audio, and documents can all be written and saved in a number of different formats. It is important to consider the variety of data that you will collect in order to appropriately categorize it.

      GPUs can help give Big Data systems the additional computational capabilities they need for ideal performance. Below are a few examples of tools which you can use for your own big data solutions:

      • Hadoop – an Apache project that allows the creation of parallel processing applications on large data sets, distributed across networked nodes.

      • Apache Spark – a unified analytics engine for large-scale data processing designed with speed and ease of use in mind.

      • Apache Storm – a distributed computation system that processes streaming data in real time.

      Video Encoding

      Video Encoding is the process of taking a video file’s original source format and converting it to another format that is viewable on a different device or using a different tool. This resource intensive task can be greatly accelerated using the power of GPUs.

      • FFmpeg – a popular open-source multimedia manipulation framework that supports a large number of video formats.
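      For example, FFmpeg builds that include NVIDIA’s NVENC encoder can offload H.264 encoding to the GPU. This sketch checks for both the driver and the encoder before running, since NVENC requires NVIDIA hardware; the file names are examples:

```shell
# GPU-accelerated transcode using the h264_nvenc encoder. Requires an
# FFmpeg build with NVENC support and an installed NVIDIA driver, so
# check for both before attempting the transcode.
if command -v nvidia-smi >/dev/null 2>&1 \
   && ffmpeg -hide_banner -encoders 2>/dev/null | grep -q h264_nvenc; then
  ffmpeg -i source.mov -c:v h264_nvenc -b:v 5M -c:a copy output.mp4
else
  echo "NVENC not available on this machine"
fi
```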

      General Purpose Computing using CUDA

      CUDA (Compute Unified Device Architecture) is a parallel computing platform and API that allows you to interact more directly with the GPU for general purpose computing. In practice, this means that a developer can write code in C, C++, or many other supported languages utilizing their GPU to create their own tools and programs.

      If you’re interested in using CUDA on your GPU Linode, see the following resources:

      Graphics Processing

      One of the most traditional use cases for a GPU is graphics processing. Transforming a large set of pixels or vertices with a shader or simulating realistic lighting via ray tracing are massive parallel processing tasks. Ray tracing is a computationally intensive process that simulates lights in a scene and renders the reflections, refractions, shadows, and indirect lighting. Doing this in real time is impossible on GPUs without hardware-based ray tracing acceleration. Linode GPU Instances offer real-time ray tracing capabilities using a single GPU.

      New to the NVIDIA RTX 6000 are the following shading enhancements:

      • Mesh shading models for vertex, tessellation, and geometry stages in the graphics pipeline
      • Variable Rate Shading to dynamically control shading rate
      • Texture-Space Shading which utilizes a private memory held texture space
      • Multi-View Rendering allowing for rendering multiple views in a single pass

      Where to Go from Here

      If you are ready to get started with Linode GPU, our Getting Started with Linode GPU Instances guide walks you through deploying a Linode GPU Instance and installing the GPU drivers, so that you can make the most of the use cases described in this guide.

      To see the extensive array of Docker container applications available, check out NVIDIA’s site. Note: To access some of these projects, you need an NGC account.



