[#390527] [test-only] FAILED (try 6) gpt4all.git=3.10.0-alt1

Girar awaiter (esgor) <girar-builder@altlinux.org>
Tue Jul 22 15:57:37 MSK 2025


https://git.altlinux.org/tasks/390527/logs/events.6.1.log
https://packages.altlinux.org/tasks/390527

subtask  name     aarch64  i586  x86_64
   #600  gpt4all        -     -  failed

2025-Jul-22 12:45:28 :: test-only task #390527 for sisyphus resumed by esgor:
#100 removed
#200 removed
#300 removed
#400 removed
#500 removed
#600 build 3.10.0-alt1 from /people/esgor/packages/gpt4all.git fetched at 2025-Jul-22 12:45:12
2025-Jul-22 12:45:30 :: [aarch64] #600 gpt4all.git 3.10.0-alt1: build start
2025-Jul-22 12:45:30 :: [x86_64] #600 gpt4all.git 3.10.0-alt1: build start
2025-Jul-22 12:45:30 :: [i586] #600 gpt4all.git 3.10.0-alt1: build start
2025-Jul-22 12:45:41 :: [i586] #600 gpt4all.git 3.10.0-alt1: build SKIPPED
2025-Jul-22 12:45:49 :: [aarch64] #600 gpt4all.git 3.10.0-alt1: build SKIPPED
[x86_64] cd /usr/src/RPM/BUILD/gpt4all-3.10.0/gpt4all-chat/x86_64-alt-linux/llmodel && /usr/bin/nvcc -forward-unknown-to-host-compiler -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_MMV_Y=1 -DGGML_CUDA_PEER_MAX_BATCH_SIZE=128 -DGGML_SCHED_MAX_COPIES=4 -DGGML_USE_CUDA -DGGML_USE_LLAMAFILE -DK_QUANTS_PER_ITERATION=2 -D_GNU_SOURCE -D_XOPEN_SOURCE=600 --options-file CMakeFiles/ggml-mainline-cuda.dir/includes_CUDA.rsp -std=c++11 "--generate-code=arch=compute_50,code=[compute_50,sm_50]" "--generate-code=arch=compute_52,code=[compute_52,sm_52]" "--generate-code=arch=compute_61,code=[compute_61,sm_61]" "--generate-code=arch=compute_70,code=[compute_70,sm_70]" "--generate-code=arch=compute_75,code=[compute_75,sm_75]" -Xcompiler=-fPIC -use_fast_math -Xcompiler "-Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wno-format-truncation -Wextra-semi -Wno-pedantic -mf16c -mfma -mavx2" -MD -MT llmodel/CMakeFiles/ggml-mainline-cuda.dir/deps/llama.cpp-mainline/ggml/src/ggml-cuda/template-instances/fattn-vec-f32-instance-hs256-f16-f16.cu.o -MF CMakeFiles/ggml-mainline-cuda.dir/deps/llama.cpp-mainline/ggml/src/ggml-cuda/template-instances/fattn-vec-f32-instance-hs256-f16-f16.cu.o.d -x cu -c /usr/src/RPM/BUILD/gpt4all-3.10.0/gpt4all-backend/deps/llama.cpp-mainline/ggml/src/ggml-cuda/template-instances/fattn-vec-f32-instance-hs256-f16-f16.cu -o CMakeFiles/ggml-mainline-cuda.dir/deps/llama.cpp-mainline/ggml/src/ggml-cuda/template-instances/fattn-vec-f32-instance-hs256-f16-f16.cu.o
[x86_64] nvcc warning : Support for offline compilation for architectures prior to '<compute/sm/lto>_75' will be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
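The nvcc warning above is informational: the --generate-code flags in the command request compute_50/52/61/70 targets, which nvcc now reports as deprecated (only compute_75 in this list is unaffected). As a minimal sketch, not taken from this task, the flag nvcc itself suggests could be passed in through CMake's standard CUDA flags variable at configure time:

    # assumption: plain cmake configure step; the flag name comes from the
    # nvcc message above, CMAKE_CUDA_FLAGS is the standard CMake variable
    cmake -DCMAKE_CUDA_FLAGS='-Wno-deprecated-gpu-targets' <source-dir>

This only silences the deprecation notice; it does not address the build failure reported further down.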
[x86_64] gmake[2]: Leaving directory '/usr/src/RPM/BUILD/gpt4all-3.10.0/gpt4all-chat/x86_64-alt-linux'
[x86_64] gmake[1]: Entering directory '/usr/src/RPM/BUILD/gpt4all-3.10.0/gpt4all-chat/x86_64-alt-linux'
[x86_64] [ 68%] Built target ggml-mainline-cuda
[x86_64] gmake[1]: Leaving directory '/usr/src/RPM/BUILD/gpt4all-3.10.0/gpt4all-chat/x86_64-alt-linux'
[x86_64] gmake: *** [Makefile:156: all] Error 2
[x86_64] error: Bad exit status from /usr/src/tmp/rpm-tmp.34335 (%build)
[x86_64] RPM build errors:
[x86_64]     File /usr/src/RPM/SOURCES/gpt4all-3.10.0-alt.patch is smaller than 8 bytes
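The entry under "RPM build errors:" says that /usr/src/RPM/SOURCES/gpt4all-3.10.0-alt.patch is smaller than 8 bytes, i.e. the patch file is effectively empty. A minimal local check, assuming the patch sits under the same name at the top of the gear repository (the layout is an assumption, the filename comes from the log):

    # how large is the patch in the working tree, and which commit last touched it?
    stat -c '%s %n' gpt4all-3.10.0-alt.patch
    git log --oneline -1 -- gpt4all-3.10.0-alt.patch

If the file really is empty, regenerating the patch (or dropping it from the spec) before resuming the task would clear this particular error; the gmake "Error 2" above is a separate compile failure whose cause is not shown in this excerpt.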
2025-Jul-22 12:57:36 :: [x86_64] gpt4all.git 3.10.0-alt1: remote: build failed
2025-Jul-22 12:57:36 :: [x86_64] #600 gpt4all.git 3.10.0-alt1: build FAILED
2025-Jul-22 12:57:36 :: [x86_64] requesting cancellation of task processing
2025-Jul-22 12:57:36 :: [x86_64] build FAILED
2025-Jul-22 12:57:36 :: task #390527 for sisyphus FAILED

