
that was a tough nut to crack

March 20, 2024 05:50 PM
LocA01:

To create a new workstation for 70B LLM inference, we'll need to consider the following components:

  1. GPU: Given the size of the model (70B parameters), a high-end NVIDIA Ampere-architecture GPU such as the NVIDIA A100 or NVIDIA GeForce RTX 3090 is recommended for fast inference performance.

  2. CPU: A high-end Intel Core i9 or AMD Ryzen 9 processor is recommended for handling the remaining compute tasks and managing GPU resources efficiently.

  3. RAM: At least 64GB of DDR4 RAM is recommended for loading the model and handling memory-intensive tasks.

  4. Storage: A high-speed NVMe SSD with at least 1TB of capacity is necessary for fast read and write speeds.

  5. Motherboard: A motherboard compatible with your selected CPU and GPU is needed.

  6. Cooling system: An efficient cooling system is required to prevent overheating during sustained workloads.

  7. Power supply: A high-quality power supply unit (PSU) with sufficient wattage to support all components is required.

  8. Case: A quality case to house the components is recommended, but not strictly necessary.

All of these components are readily available from retailers such as Amazon or Newegg.

Please note that component availability and pricing vary with discounts and optional features. The total cost of a complete workstation is the sum of all component prices plus shipping fees.
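The GPU recommendation above can be sanity-checked with quick arithmetic. The sketch below estimates the memory needed just for the weights of a 70B-parameter model at common precisions; real usage also includes the KV-cache, activations, and framework overhead, so treat these as lower bounds:

```python
# Rough weight-memory estimate for a 70B-parameter dense model.
# These are approximations; actual VRAM usage is higher because of
# KV-cache, activations, and runtime overhead.

PARAMS = 70e9  # 70 billion parameters (assumes a dense model)

BYTES_PER_PARAM = {
    "fp16": 2.0,   # half precision
    "int8": 1.0,   # 8-bit quantization
    "int4": 0.5,   # 4-bit quantization
}

def weight_memory_gb(params: float, precision: str) -> float:
    """Memory required for the weights alone, in GB (1 GB = 1e9 bytes)."""
    return params * BYTES_PER_PARAM[precision] / 1e9

for precision in BYTES_PER_PARAM:
    print(f"{precision}: ~{weight_memory_gb(PARAMS, precision):.0f} GB")
# fp16: ~140 GB, int8: ~70 GB, int4: ~35 GB
```

At fp16, 70B weights alone need roughly 140 GB, so even an 80 GB A100 requires quantization or model parallelism, and a 24 GB RTX 3090 is only workable with 4-bit quantization plus CPU offloading or a multi-GPU setup.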

March 20, 2024 05:27 PM
USER:
can you create a new workstation for 70B LLM inference? you can choose the components, but show me their links and calculate rough costs.

 

the next step is web-crawling capability!

    Featured Posts
    the first proof-of-concept semi-liquid neural network (advanced LLM) run came back positive (SELI NEORAD for short)
    https://github.com/DuskoPre/AutoCoder/wiki I'm not just testing LLMs but also building a semi-liquid neural network (advanced LLM) with chromadb: https://github.com/DuskoPre/liquid-neural-network-with-chromadb-cache. It seems a major upgrade for LLMs is possible through classification (a managing system) of inputs to the LLM. The model itself is still static, but through the classifier implementation it can be made semi-liquid. #dichipcoin
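    The "semi-liquid" idea can be sketched in a few lines: the LLM weights stay static, but a lightweight classifier routes each input either to a cached answer or to a fresh model call, so the system's behavior adapts over time. This is a minimal toy sketch of that routing pattern; the embedding, similarity, and class names here are illustrative stand-ins, not the actual chromadb-based implementation from the linked repository:

```python
# Toy sketch of classifier-managed routing in front of a static LLM.
# Stand-in functions replace chromadb and the real model; all names
# here are hypothetical, chosen for illustration only.

def embed(text: str) -> set:
    """Toy 'embedding': a bag of lowercase words (stand-in for a real encoder)."""
    return set(text.lower().split())

def similarity(a: set, b: set) -> float:
    """Jaccard similarity between two bag-of-words 'embeddings'."""
    return len(a & b) / len(a | b) if a | b else 0.0

class SemiLiquidRouter:
    def __init__(self, llm, threshold: float = 0.8):
        self.llm = llm            # the static model (any callable)
        self.cache = []           # list of (embedding, answer) pairs
        self.threshold = threshold

    def ask(self, query: str) -> str:
        q = embed(query)
        # "Classifier" step: is this input close to something seen before?
        for emb, answer in self.cache:
            if similarity(q, emb) >= self.threshold:
                return answer     # reuse the cached result, no model call
        answer = self.llm(query)  # fall through to the static LLM
        self.cache.append((q, answer))  # the cache is what makes it "liquid"
        return answer

router = SemiLiquidRouter(llm=lambda q: f"LLM answer to: {q}")
print(router.ask("what gpu for 70b inference"))
print(router.ask("What GPU for 70B inference"))  # served from the cache
```

    In the real project a vector store like chromadb would replace the in-memory list, with embedding-similarity queries deciding whether an input is "known".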