[#388454] p11 EPERM (try 2) tinyllamas-gguf.git=0-alt1 llama.cpp.git=5753-alt1

Girar awaiter (vt) girar-builder at altlinux.org
Mon Jun 30 01:58:48 MSK 2025


https://git.altlinux.org/tasks/388454/logs/events.2.1.log
https://packages.altlinux.org/tasks/388454

subtask  name             aarch64  i586  x86_64
    #40  tinyllamas-gguf       37    21      21
   #100  llama.cpp          12:41     -    7:52

2025-Jun-29 22:38:41 :: task #388454 for p11 resumed by vt:
2025-Jun-29 22:38:41 :: message: update
#40 build 0-alt1 from /gears/t/tinyllamas-gguf.git fetched at 2025-Jun-29 22:38:31 from sisyphus
#100 build 5753-alt1 from /gears/l/llama.cpp.git fetched at 2025-Jun-29 22:36:16 from sisyphus
2025-Jun-29 22:38:41 :: created build repo
2025-Jun-29 22:38:42 :: #100: force rebuild
2025-Jun-29 22:38:43 :: [aarch64] #40 tinyllamas-gguf.git 0-alt1: build start
2025-Jun-29 22:38:43 :: [i586] #40 tinyllamas-gguf.git 0-alt1: build start
2025-Jun-29 22:38:43 :: [x86_64] #40 tinyllamas-gguf.git 0-alt1: build start
2025-Jun-29 22:39:04 :: [i586] #40 tinyllamas-gguf.git 0-alt1: build OK
2025-Jun-29 22:39:04 :: [i586] #100 llama.cpp.git 5753-alt1: build start
2025-Jun-29 22:39:04 :: [x86_64] #40 tinyllamas-gguf.git 0-alt1: build OK
2025-Jun-29 22:39:05 :: [x86_64] #100 llama.cpp.git 5753-alt1: build start
2025-Jun-29 22:39:17 :: [i586] #100 llama.cpp.git 5753-alt1: build SKIPPED
2025-Jun-29 22:39:20 :: [aarch64] #40 tinyllamas-gguf.git 0-alt1: build OK
2025-Jun-29 22:39:21 :: [aarch64] #100 llama.cpp.git 5753-alt1: build start
build/100/x86_64/log:[00:04:14] debuginfo.req: WARNING: /usr/lib64/libcublas.so.12 is not yet debuginfo-enabled
build/100/x86_64/log:[00:04:14] debuginfo.req: WARNING: /usr/lib64/libcudart.so.12 is not yet debuginfo-enabled
2025-Jun-29 22:46:57 :: [x86_64] #100 llama.cpp.git 5753-alt1: build OK
2025-Jun-29 22:52:02 :: [aarch64] #100 llama.cpp.git 5753-alt1: build OK
2025-Jun-29 22:52:09 :: #40: tinyllamas-gguf.git 0-alt1: build check OK
2025-Jun-29 22:52:29 :: #100: llama.cpp.git 5753-alt1: build check OK
2025-Jun-29 22:52:31 :: build check OK
2025-Jun-29 22:52:58 :: noarch check OK
2025-Jun-29 22:53:00 :: plan: src +2 -1 =19694, aarch64 +8 -2 =34708, noarch +1 -0 =20811, x86_64 +10 -2 =35476
#100 llama.cpp 20240225-alt1 -> 1:5753-alt1
 Wed Jun 25 2025 Vitaly Chikunov <vt at altlinux> 1:5753-alt1
 - Update to b5753 (2025-06-24).
 - Install an experimental rpc backend and server. The rpc code is a
   proof-of-concept, fragile, and insecure.
 Sat May 10 2025 Vitaly Chikunov <vt at altlinux> 1:5332-alt1
 - Update to b5332 (2025-05-09), with vision support in llama-server.
 - Enable Vulkan backend (for GPU) in llama.cpp-vulkan package.
 Mon Mar 10 2025 Vitaly Chikunov <vt at altlinux> 1:4855-alt1
 - Update to b4855 (2025-03-07).
 - Enable CUDA backend (for NVIDIA GPU) in llama.cpp-cuda package.
 [...]
2025-Jun-29 22:53:01 :: llama.cpp: closes bugs: 50962
2025-Jun-29 22:53:39 :: patched apt indices
2025-Jun-29 22:53:48 :: created next repo
2025-Jun-29 22:53:58 :: duplicate provides check OK
2025-Jun-29 22:54:34 :: dependencies check OK
2025-Jun-29 22:55:03 :: [x86_64 aarch64] ELF symbols check OK
2025-Jun-29 22:55:16 :: [i586] #40 tinyllamas-gguf: install check OK
2025-Jun-29 22:55:18 :: [x86_64] #100 libllama: install check OK
2025-Jun-29 22:55:26 :: [x86_64] #100 libllama-debuginfo: install check OK
2025-Jun-29 22:55:27 :: [aarch64] #100 libllama: install check OK
	x86_64: libllama-devel=1:5753-alt1 post-install unowned files:
 /usr/lib64/cmake
2025-Jun-29 22:55:32 :: [x86_64] #100 libllama-devel: install check OK
2025-Jun-29 22:55:40 :: [aarch64] #100 libllama-debuginfo: install check OK
	aarch64: libllama-devel=1:5753-alt1 post-install unowned files:
 /usr/lib64/cmake
2025-Jun-29 22:55:51 :: [aarch64] #100 libllama-devel: install check OK
2025-Jun-29 22:55:56 :: [x86_64] #100 llama.cpp: install check OK
2025-Jun-29 22:56:05 :: [x86_64] #100 llama.cpp-cpu: install check OK
2025-Jun-29 22:56:05 :: [aarch64] #100 llama.cpp: install check OK
2025-Jun-29 22:56:20 :: [aarch64] #100 llama.cpp-cpu: install check OK
2025-Jun-29 22:56:25 :: [x86_64] #100 llama.cpp-cpu-debuginfo: install check OK
2025-Jun-29 22:56:49 :: [x86_64] #100 llama.cpp-cuda: install check OK
2025-Jun-29 22:56:51 :: [aarch64] #100 llama.cpp-cpu-debuginfo: install check OK
2025-Jun-29 22:57:06 :: [aarch64] #100 llama.cpp-vulkan: install check OK
2025-Jun-29 22:57:15 :: [x86_64] #100 llama.cpp-cuda-debuginfo: install check OK
2025-Jun-29 22:57:24 :: [aarch64] #100 llama.cpp-vulkan-debuginfo: install check OK
2025-Jun-29 22:57:24 :: [x86_64] #100 llama.cpp-vulkan: install check OK
2025-Jun-29 22:57:34 :: [aarch64] #40 tinyllamas-gguf: install check OK
2025-Jun-29 22:57:35 :: [x86_64] #100 llama.cpp-vulkan-debuginfo: install check OK
2025-Jun-29 22:57:41 :: [x86_64] #40 tinyllamas-gguf: install check OK
2025-Jun-29 22:58:00 :: [x86_64-i586] generated apt indices
2025-Jun-29 22:58:00 :: [x86_64-i586] created next repo
2025-Jun-29 22:58:10 :: [x86_64-i586] dependencies check OK
2025-Jun-29 22:58:12 :: gears inheritance check OK
2025-Jun-29 22:58:13 :: srpm inheritance check OK
girar-check-perms: access to tinyllamas-gguf DENIED for vt: project `tinyllamas-gguf' is not listed in the acl file for repository `p11', and the policy for such projects in `p11' is to deny
check-subtask-perms: #40: tinyllamas-gguf: needs approvals from members of @maint and @tester groups
girar-check-perms: access to llama.cpp DENIED for vt: project `llama.cpp' is not listed in the acl file for repository `p11', and the policy for such projects in `p11' is to deny
check-subtask-perms: #100: llama.cpp: needs approvals from members of @maint and @tester groups
2025-Jun-29 22:58:15 :: acl check FAILED
2025-Jun-29 22:58:36 :: created contents_index files
2025-Jun-29 22:58:44 :: created hash files: aarch64 noarch src x86_64
2025-Jun-29 22:58:48 :: task #388454 for p11 EPERM