Python tensorflow - "Dst tensor is not initialized." error message
What if you run into an error like this?
2021-08-29 12:45:04.918972: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-08-29 12:45:05.625247: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 6643 MB memory: -> device: 0, name: NVIDIA GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1
2021-08-29 12:45:06.153571: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)
Epoch 1/10
994/1000 [============================>.] - ETA: 0s - loss: 1.1682 - accuracy: 0.8897
2021-08-29 12:45:23.530835: W tensorflow/core/common_runtime/bfc_allocator.cc:457] Allocator (GPU_0_bfc) ran out of memory trying to allocate 7.48GiB (rounded to 8028160000)requested by op _EagerConst
If the cause is memory fragmentation maybe the environment variable 'TF_GPU_ALLOCATOR=cuda_malloc_async' will improve the situation.
Current allocation summary follows.
2021-08-29 12:45:23.532022: I tensorflow/core/common_runtime/bfc_allocator.cc:1004] BFCAllocator dump for GPU_0_bfc
2021-08-29 12:45:23.532270: I tensorflow/core/common_runtime/bfc_allocator.cc:1011] Bin (256): Total Chunks: 35, Chunks in use: 35. 8.8KiB allocated for chunks. 8.8KiB in use in bin. 480B client-requested in use in bin.
2021-08-29 12:45:23.532546: I tensorflow/core/common_runtime/bfc_allocator.cc:1011] Bin (512): Total Chunks: 3, Chunks in use: 3. 1.5KiB allocated for chunks. 1.5KiB in use in bin. 1.5KiB client-requested in use in bin.
...[omitted]...
2021-08-29 12:45:23.581168: I tensorflow/core/common_runtime/bfc_allocator.cc:1068] 1 Chunks of size 156800000 totalling 149.54MiB
2021-08-29 12:45:23.581357: I tensorflow/core/common_runtime/bfc_allocator.cc:1072] Sum Total of in-use chunks: 188.12MiB
2021-08-29 12:45:23.581524: I tensorflow/core/common_runtime/bfc_allocator.cc:1074] total_region_allocated_bytes_: 6966018048 memory_limit_: 6966018048 available bytes: 0 curr_region_allocation_bytes_: 13932036096
2021-08-29 12:45:23.581776: I tensorflow/core/common_runtime/bfc_allocator.cc:1080] Stats:
Limit: 6966018048
InUse: 197255168
MaxInUse: 197255168
NumAllocs: 37070
MaxAllocSize: 156800000
Reserved: 0
PeakReserved: 0
LargestFreeBlock: 0
2021-08-29 12:45:23.582270: W tensorflow/core/common_runtime/bfc_allocator.cc:468] ***_________________________________________________________________________________________________
Traceback (most recent call last):
...[omitted]...
File "E:\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py", line 106, in convert_to_eager_tensor
return ops.EagerTensor(value, ctx.device_name, dtype)
tensorflow.python.framework.errors_impl.InternalError: Failed copying input tensor from /job:localhost/replica:0/task:0/device:CPU:0 to /job:localhost/replica:0/task:0/device:GPU:0 in order to run _EagerConst: Dst tensor is not initialized.
Process finished with exit code 1
The cause may be a shortage of GPU memory, in which case feeding a smaller dataset should make it work. ^^ However, the error can also occur when a dataset with the wrong dimensions is fed, so as covered in "Python tensorflow - ValueError: Shapes (...) and (...) are incompatible", it is also worth checking whether the dataset has the expected dimensions.
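As a quick sanity check for that second possibility, you can verify the array shapes before calling model.fit(). A minimal sketch with NumPy, using hypothetical x_train / y_train arrays in place of the real dataset:

```python
import numpy as np

# Hypothetical stand-ins for the real training data.
x_train = np.zeros((1000, 28, 28), dtype=np.float32)
y_train = np.zeros((1000,), dtype=np.int64)

# The sample counts on the first axis must match, and the remaining
# axes of x_train must agree with the model's input layer.
assert x_train.shape[0] == y_train.shape[0], "sample counts differ"
print("x:", x_train.shape, "y:", y_train.shape)
```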
For reference, if you look at the middle of the messages, there is this recommendation:
W tensorflow/core/common_runtime/bfc_allocator.cc:457] Allocator (GPU_0_bfc) ran out of memory trying to allocate 7.48GiB (rounded to 8028160000)requested by op _EagerConst
If the cause is memory fragmentation maybe the environment variable 'TF_GPU_ALLOCATOR=cuda_malloc_async' will improve the situation.
Current allocation summary follows.
I actually set this variable,
os.putenv('TF_GPU_ALLOCATOR', 'cuda_malloc_async')
and ran the script again, but it only printed a message saying the Async allocator would be used and then, a moment later, terminated abnormally.
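Incidentally, os.putenv() did take effect here (the log below confirms the async allocator was picked up), but it does not update os.environ itself, so the usual recommendation is to assign to os.environ instead, and to do so before TensorFlow is imported, since the variable is read during GPU initialization. A sketch:

```python
import os

# Assigning to os.environ also updates the process environment and,
# unlike os.putenv(), keeps os.environ itself consistent. The variable
# must be set before TensorFlow initializes the GPU, i.e. before the
# import below.
os.environ['TF_GPU_ALLOCATOR'] = 'cuda_malloc_async'

# import tensorflow as tf  # import TensorFlow only after setting it
```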
2021-08-29 13:36:43.083542: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-08-29 13:36:43.726382: I tensorflow/core/common_runtime/gpu/gpu_process_state.cc:215] Using CUDA malloc Async allocator for GPU: 0
Process finished with exit code -1073740940 (0xC0000374)
So it seems that simply reducing memory consumption is the answer. ^^
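One common way to cut the memory spike is to avoid handing model.fit() one giant NumPy array, which is what produces the single 7.48GiB _EagerConst allocation above, and to stream the data in batches with tf.data instead. A sketch under that assumption, again with hypothetical x_train / y_train arrays (and a hypothetical model):

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-ins for the real training data.
x_train = np.zeros((1000, 28, 28), dtype=np.float32)
y_train = np.zeros((1000,), dtype=np.int64)

# Only one 32-sample batch at a time then needs to live on the GPU.
ds = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
        .shuffle(1000)
        .batch(32))

# model.fit(ds, epochs=10)  # feed the dataset instead of raw arrays
```

Note that from_tensor_slices still keeps the full arrays in host memory; if the data does not fit in RAM either, a generator or TFRecord pipeline would be needed.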
[I would like to share opinions on this article with you. If anything is wrong or insufficient, or if you have any questions, please leave a comment anytime.]