Device 0 out of memory: what to do (NBMiner)

NebuTech / NBMiner


Out of memory #95

Maas1337 opened this issue Jun 8, 2019 · 7 comments


Maas1337 commented Jun 8, 2019

[2019-06-09 00:05:09,232] INFO — | NBMiner |
[2019-06-09 00:05:09,232] INFO — | NVIDIA GPU Miner |
[2019-06-09 00:05:09,232] INFO — | 23.2 |
[2019-06-09 00:05:09,232] INFO — ———————————————-
[2019-06-09 00:05:09,232] INFO — ALGO: cuckatoo
[2019-06-09 00:05:09,232] INFO — URL: stratum+tcp://
[2019-06-09 00:05:09,232] INFO — USER:
[2019-06-09 00:05:09,985] INFO — ============= Device Information =============
[2019-06-09 00:05:09,986] INFO — * ID 0: GeForce GTX 1070 8192 MB, CC 61
[2019-06-09 00:05:09,997] INFO — * ID 1: GeForce GTX 1070 8192 MB, CC 61
[2019-06-09 00:05:10,009] INFO — * ID 2: GeForce GTX 1070 8192 MB, CC 61
[2019-06-09 00:05:10,010] INFO — * ID 3: GeForce GTX 1070 8192 MB, CC 61
[2019-06-09 00:05:10,011] INFO — * ID 4: GeForce GTX 1070 8192 MB, CC 61
[2019-06-09 00:05:10,012] INFO — * ID 5: GeForce GTX 1080 8192 MB, CC 61
[2019-06-09 00:05:10,013] INFO — * ID 6: GeForce GTX 1070 8192 MB, CC 61
[2019-06-09 00:05:10,013] INFO — ==============================================
[2019-06-09 00:05:10,109] INFO — NVML initialized.
[2019-06-09 00:05:13,112] INFO — cuckatoo — Logging in to .
[2019-06-09 00:05:13,615] INFO — cuckatoo — Login succeeded.
[2019-06-09 00:05:13,615] INFO — API:
[2019-06-09 00:05:13,616] INFO — API server started.
[2019-06-09 00:05:13,857] INFO — cuckatoo — New job from, ID: 31473003, DIFF: 1.00
[2019-06-09 00:05:15,619] INFO — Worker thread started on device 0.
[2019-06-09 00:05:16,094] FATAL — Device 0, out of memory.
Mining program unexpected exit.
Reason: Process crashed
Restart miner after 10 secs .

My specs:
Windows 10 Pro 1803
120 GB SSD
6× GTX 1070
1× GTX 1080

virtual memory: 57344 MB (7×8192)

I tried with more virtual memory (up to 75 GB), but every time I got Out of Memory (without OC).
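A common rule of thumb for Windows mining rigs (my assumption, not something NBMiner documents in this thread) is to size the pagefile to at least the total VRAM across all cards, plus some headroom. A quick sanity check of the numbers above:

```python
# Rough pagefile sizing check for the rig above: seven 8 GB cards.
# The 30% headroom factor below is a guess, not an NBMiner requirement.
vram_per_gpu_mb = 8192
gpu_count = 7

total_vram_mb = vram_per_gpu_mb * gpu_count
print(total_vram_mb)  # 57344, matching the pagefile the reporter configured

with_headroom_mb = int(total_vram_mb * 1.3)
print(with_headroom_mb)  # 74547
```

The reporter's 57344 MB pagefile only just matches total VRAM with zero headroom, which may explain why bumping it toward 75 GB was the next thing to try.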


Stable Diffusion CUDA Out of Memory: How to Fix

Just want the answer? In most cases, you can fix this error by setting a lower image resolution or fewer images per generation. Or, use an app like NightCafe that runs Stable Diffusion online in the cloud so you don’t need to deal with CUDA errors at all.

One of the best AI image generators currently available is Stable Diffusion online. It’s a text-to-image technology that enables individuals to produce beautiful works of art in a matter of seconds. If you take the time to study a Stable Diffusion prompt guide, you can quickly make quality images with your computer or on the cloud, and learn what to do if you get a CUDA out-of-memory error message.

If Stable Diffusion is used locally on a computer rather than via a website or application programming interface, the machine will need to have certain capabilities to handle the program. Your graphics card is the most critical component when using Stable Diffusion because it operates almost entirely on a graphics processing unit (GPU)—and usually on a CUDA-based Nvidia GPU.

The Nvidia CUDA parallel computing platform is the foundation for thousands of GPU-accelerated applications. It is the platform of choice for developing and implementing novel deep learning and parallel computing algorithms due to CUDA’s flexibility and programmability.

What Is CUDA?

NVIDIA developed the parallel computing platform and programming model called Compute Unified Device Architecture, or CUDA. With more than twenty million downloads, CUDA has helped developers speed up their applications using GPU accelerators.

In addition to speeding up applications for high-performance computing and research, CUDA has gained widespread use in consumer and commercial ecosystems, as well as open-source AI generators such as Stable Diffusion.

What Happens With a Memory Error in Stable Diffusion?

Running Stable Diffusion on your computer may occasionally cause memory problems and prevent the model from functioning correctly. This occurs when your GPU memory allocation is exhausted. It is important to note that running Stable Diffusion requires at least four gigabytes (GB) of video random access memory (VRAM). One common recommendation is an NVIDIA 3xxx-series GPU, which starts at six GB of VRAM. Other components of your computer, such as your central processing unit (CPU), RAM, and storage devices, are less important.

To train an AI model on a GPU, the framework must compute accurate gradients from the difference between labels and predictions. To produce predictions at all, both the model and the input data must be allocated in CUDA memory. A memory error occurs when the working set becomes too large to fit in the GPU’s memory.

Each project has a specific quantity of data that needs to be uploaded, either to the VRAM (the GPU’s memory, used when the CUDA or RTX GPU engine does the work) or to the RAM (used when the CPU engine does).

GPUs typically contain significantly less memory than a computer’s RAM. A project may occasionally be too big to be fully uploaded to the VRAM, and so it fails. The geometry’s intricacy, the extent to which high-resolution textures are used, render settings, and other elements can all play a part.

How to Fix a Memory Error in Stable Diffusion

One of the easiest ways to fix a memory error is simply restarting the computer. If that doesn’t work, another potential remedy is to reduce the resolution. Reduce your image to 256 x 256 resolution by passing -W 256 -H 256 on the command line.
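Why resolution helps: activation memory in a diffusion model grows roughly with the number of pixels, so halving both dimensions cuts that memory roughly fourfold. A back-of-the-envelope sketch (the quadratic scaling is an approximation, not an exact model of Stable Diffusion’s allocator):

```python
def relative_pixel_count(width: int, height: int, base: int = 512) -> float:
    """Pixel count relative to a default base x base render."""
    return (width * height) / (base * base)

# Dropping from 512x512 to 256x256 processes a quarter of the pixels,
# so activation memory shrinks by roughly the same factor.
print(relative_pixel_count(256, 256))  # 0.25
```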

You can also try adjusting how much memory the CUDA device is allowed to use by modifying your system’s GPU settings. Changing a configuration file or passing command-line options frequently resolves the issue.
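For PyTorch-based Stable Diffusion runs, one knob of this kind is the caching allocator’s `PYTORCH_CUDA_ALLOC_CONF` environment variable (available in recent PyTorch releases); capping `max_split_size_mb` can reduce fragmentation-related out-of-memory errors. A minimal sketch, with 128 MB as an illustrative value rather than a recommendation from the article:

```python
import os

# Must be set before the first CUDA allocation, ideally before importing torch.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])  # max_split_size_mb:128
```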

Another option is to buy a new GPU. If you go this route, get a GPU with more memory to replace the existing GPU if VRAM is consistently causing runtime problems that other methods can’t solve.

Divide the data into smaller batches. Processing smaller sets of data may be needed to avoid memory overload. This tactic reduces overall memory utilisation and the task can be completed without running out of memory.
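The batching idea can be sketched framework-agnostically: stream the workload in fixed-size chunks so only one chunk’s worth of data needs to be resident at a time. The per-batch work itself is left out here; only the slicing logic is shown:

```python
from typing import Iterator, List, TypeVar

T = TypeVar("T")

def batched(items: List[T], batch_size: int) -> Iterator[List[T]]:
    """Yield successive fixed-size slices of `items`."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

# 10 items in batches of 4 -> sizes 4, 4, 2; peak memory now tracks the
# batch size, not the whole dataset.
sizes = [len(chunk) for chunk in batched(list(range(10)), 4)]
print(sizes)  # [4, 4, 2]
```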

You can also try a different framework. If your TensorFlow or PyTorch setup keeps running out of memory, a more memory-efficient framework or configuration may help.

Finally, make your code more efficient to avoid memory issues. You can decrease the data size, use more effective methods, or try other speed enhancements.

In Conclusion

The best way to solve a memory problem in Stable Diffusion will depend on the specifics of your situation, including the volume of data being processed and the hardware and software employed.

You can further enhance your creations with Stable Diffusion samplers such as k_LMS, DDIM and k_euler_a. The incredible results happen without any pre- or post-processing.

Ready to take a deep dive into the Stable Diffusion universe? Sign up for a free account on NightCafe and let your creative ideas flow.


Solving the “RuntimeError: CUDA Out of memory” error

Now you might be wondering how I got here and why this blog post has such a bland, un-enticing, non-clickbait-y title. Why is this not an ultimate guide to something-something basics? I’ll tell you. This string is what I would enter into Google (are you still using Google?) when I’m completely devoid of hope, praying to an imaginary deity whose existence I question, to make my deep learning models run on the GPU. You can dread it, run from it, but this CUDA issue still arrives.

So here’s to hoping that your prayer will be answered when you find this post. Right off the bat, you’ll need to try these recommendations, in increasing order of code changes required.

Of these options, the easiest and the most likely to work for you, if you’re using a pretrained model, is the first one.

Changing the Batch Size

If you are running pre-existing code or a standard model architecture, your best move is to reduce the batch size. Cut it in half and keep halving it until the error goes away.

However, if in this endeavor you find yourself setting the batch size to 1 and it still doesn’t help, then there are other problems, and larger batch sizes will work once you fix them.
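The halving loop can be automated. The sketch below simulates an out-of-memory failure with a plain `RuntimeError` and a made-up 16-sample capacity; in real code the exception would come from your framework’s training step, and `train_step` is a hypothetical stand-in:

```python
def train_step(batch_size: int, capacity: int = 16) -> None:
    """Stand-in for a real training step; `capacity` fakes available VRAM."""
    if batch_size > capacity:
        raise RuntimeError("CUDA out of memory (simulated)")

def find_workable_batch_size(start: int) -> int:
    """Halve the batch size until a step succeeds, down to a floor of 1."""
    batch_size = start
    while True:
        try:
            train_step(batch_size)
            return batch_size
        except RuntimeError:
            if batch_size == 1:
                raise  # even batch size 1 fails: the problem lies elsewhere
            batch_size //= 2

print(find_workable_batch_size(128))  # 16
```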

Lower the Precision

Now, if you’re working with PyTorch Lightning (like you should be), you might also try changing the precision to `float16`. This might come with problems, like a mismatch between an expected Double tensor and a received Float tensor, but it is much lighter on memory.
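The memory argument is simple arithmetic: float16 stores each value in 2 bytes instead of float32’s 4, so parameters and activations take roughly half the space (gradients and optimizer state complicate the real picture). A quick illustration:

```python
def tensor_bytes(num_elements: int, bytes_per_element: int) -> int:
    """Raw storage for a dense tensor of the given element width."""
    return num_elements * bytes_per_element

n = 1_000_000            # e.g. a million-parameter layer
fp32 = tensor_bytes(n, 4)  # 4 bytes per float32 value
fp16 = tensor_bytes(n, 2)  # 2 bytes per float16 value
print(fp32 // fp16)      # 2: half the memory per tensor
```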
