path: root/pkgs/by-name/ll
Age          Commit message  (Author)  [files changed, -deleted/+added lines]
5 days       Merge pull request #323056 from SomeoneSerge/fix/cudaPackages/outputSpecified  (Someone)  [1 file, -7/+3]
             cudaPackages: make getOutput work again
7 days       treewide: cuda: use propagatedBuildInputs, lib.getOutput  (Someone Serge)  [1 file, -7/+3]
9 days       llama-cpp: 3091 -> 3260  (Lan Tian)  [1 file, -20/+19]
2024-06-23   treewide: use cmakeCudaArchitecturesString  (Jeremy Schlatter)  [1 file, -6/+1]
2024-06-09   llama-cpp: 3089 -> 3091  (R. Ryantm)  [1 file, -2/+2]
2024-06-06   llama-cpp: 3070 -> 3089  (Jono Chang)  [1 file, -2/+2]
             Diff: https://github.com/ggerganov/llama.cpp/compare/b3070..b3089
             Changelog: https://github.com/ggerganov/llama.cpp/releases/tag/b3089
2024-06-03   llama-cpp: 3015 -> 3070  (R. Ryantm)  [1 file, -2/+2]
2024-06-02   Merge pull request #315258 from r-ryantm/auto-update/llama-cpp  (OTABI Tomoya)  [1 file, -2/+2]
             llama-cpp: 2953 -> 3015
2024-06-01   Merge pull request #313525 from maxstrid/llama-cpp-rpc  (Peder Bergebakken Sundt)  [1 file, -6/+9]
             llama-cpp: Add rpc and remove mpi support
2024-05-28   llama-cpp: 2953 -> 3015  (R. Ryantm)  [1 file, -2/+2]
2024-05-24   llm-ls: 0.5.2 -> 0.5.3  (R. Ryantm)  [1 file, -3/+3]
2024-05-21   llama-cpp: Add rpc and remove mpi support  (Maxwell Henderson)  [1 file, -6/+9]
             llama-cpp no longer supports MPI; RPC is the recommended alternative.
             See: https://github.com/ggerganov/llama.cpp/pull/7395
             Signed-off-by: Maxwell Henderson <mxwhenderson@gmail.com>
2024-05-21   llama-cpp: 2901 -> 2953  (R. Ryantm)  [1 file, -2/+2]
2024-05-16   llama-cpp: 2843 -> 2901  (R. Ryantm)  [1 file, -2/+2]
2024-05-11   llama-cpp: 2781 -> 2843  (R. Ryantm)  [1 file, -2/+2]
2024-05-03   llama-cpp: 2746 -> 2781  (R. Ryantm)  [1 file, -2/+2]
2024-04-30   llama-cpp: set build_number/build_commit for version info  (Enno Richter)  [1 file, -2/+11]
2024-04-26   llama-cpp: 2700 -> 2746  (R. Ryantm)  [1 file, -2/+2]
2024-04-25   llm-ls: 0.4.0 -> 0.5.2  (Roman Zakirzyanov)  [1 file, -3/+9]
2024-04-21   llama-cpp: 2674 -> 2700  (R. Ryantm)  [1 file, -2/+2]
2024-04-14   llama-cpp: 2636 -> 2674  (R. Ryantm)  [1 file, -2/+2]
2024-04-09   llama-cpp: 2589 -> 2636  (R. Ryantm)  [1 file, -2/+2]
2024-04-04   llama-cpp: 2568 -> 2589  (R. Ryantm)  [1 file, -2/+2]
2024-03-31   llama-cpp: use pkgs.autoAddDriverRunpath  (Jonathan Ringer)  [1 file, -1/+2]
2024-03-28   llama-cpp: update from b2481 to b2568  (Joseph Stahl)  [1 file, -2/+2]
2024-03-26   llama-cpp: embed (don't pre-compile) metal shaders  (Joseph Stahl)  [1 file, -1/+4]
             Port of https://github.com/ggerganov/llama.cpp/pull/6118, although compiling the
             shaders with Xcode is left disabled, since it requires disabling the sandbox
             (and only works on macOS anyway).
2024-03-26   llama-cpp: rename cuBLAS to CUDA  (Joseph Stahl)  [1 file, -1/+1]
             Matches change from upstream:
             https://github.com/ggerganov/llama.cpp/commit/280345968dabc00d212d43e31145f5c9961a7604
2024-03-25   llama-cpp: fix blasSupport (#298567)  (Christian Kögler)  [1 file, -3/+5]
             * llama-cpp: fix blasSupport
             * llama-cpp: switch from openblas to blas
2024-03-21   llama-cpp: 2454 -> 2481  (R. Ryantm)  [1 file, -2/+2]
2024-03-19   Merge pull request #281576 from yannham/refactor/cuda-setup-hooks-refactor  (Someone)  [1 file, -4/+1]
             cudaPackages: generalize and refactor setup hooks
2024-03-18   llama-cpp: 2424 -> 2454  (R. Ryantm)  [1 file, -2/+2]
2024-03-15   cudaPackages: generalize and refactor setup hook  (Yann Hamdaoui)  [1 file, -4/+1]
             This PR refactors the CUDA setup hooks, in particular autoAddOpenGLRunpath and
             autoAddCudaCompatRunpathHook, which shared a lot of code (in fact, the latter was
             introduced by copy-pasting most of the former's bash script). That duplication is
             unsatisfying for maintenance, as a recent patch showed, because changes have to be
             applied to both hooks. This commit abstracts the common part into a single shell
             script that applies a generic patch action to every ELF file in the output. For
             autoAddOpenGLRunpath the action is just addOpenGLRunpath (now addDriverRunpath);
             for autoAddCudaCompatRunpathHook it is a function of a few lines. While at it, the
             newer addDriverRunpath is used instead of the previous addOpenGLRunpath, and the
             CUDA hook is renamed to reflect that as well.
             Co-Authored-By: Connor Baker <connor.baker@tweag.io>
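The generalization this commit describes (one helper that applies an arbitrary patch action to every ELF file in an output) can be sketched roughly as follows. This is a hypothetical illustration, not the actual nixpkgs hook code; `patchElfFiles` and its action parameter are invented names.

```shell
# Sketch of a generalized hook: walk an output directory and invoke a
# caller-supplied action on every ELF file found.
patchElfFiles() {
  local outputDir="$1" action="$2" f elfMagic
  # ELF files begin with the 4-byte magic 0x7f 'E' 'L' 'F' (\177 = 0x7f octal).
  elfMagic=$(printf '\177ELF')
  find "$outputDir" -type f | while IFS= read -r f; do
    if [ "$(head -c 4 "$f" 2>/dev/null)" = "$elfMagic" ]; then
      # The action is pluggable: e.g. addDriverRunpath for the driver-runpath
      # hook, or a small cuda_compat-specific rpath fix for the other hook.
      "$action" "$f"
    fi
  done
}
```

In the real hooks, per the commit message, the action is addDriverRunpath in one case and a few-line function in the other; only the ELF-walking part is shared.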
2024-03-14   llama-cpp: 2382 -> 2424  (R. Ryantm)  [1 file, -2/+2]
2024-03-10   llama-cpp: 2346 -> 2382  (R. Ryantm)  [1 file, -2/+2]
2024-03-05   llama-cpp: 2294 -> 2346  (R. Ryantm)  [1 file, -2/+2]
2024-02-28   llama-cpp: 2249 -> 2294; bring upstream flake  (happysalada)  [1 file, -87/+96]
2024-02-23   llama-cpp: 2212 -> 2249  (R. Ryantm)  [1 file, -2/+2]
2024-02-20   llama-cpp: 2167 -> 2212  (R. Ryantm)  [1 file, -2/+2]
2024-02-16   llama-cpp: 2135 -> 2167  (R. Ryantm)  [1 file, -2/+2]
2024-02-13   llama-cpp: 2105 -> 2135  (R. Ryantm)  [1 file, -2/+2]
2024-02-09   llama-cpp: 2074 -> 2105  (R. Ryantm)  [1 file, -2/+2]
2024-02-06   llama-cpp: 2050 -> 2074  (R. Ryantm)  [1 file, -2/+2]
2024-02-02   llama-cpp: 1892 -> 2050  (R. Ryantm)  [1 file, -2/+2]
2024-01-19   llama-cpp: 1848 -> 1892; add static build mode  (happysalada)  [1 file, -2/+15]
2024-01-12   llama-cpp: 1742 -> 1848  (Alex Martens)  [1 file, -14/+2]
2024-01-13   Merge pull request #278120 from r-ryantm/auto-update/llama-cpp  (Weijia Wang)  [1 file, -2/+2]
             llama-cpp: 1710 -> 1742
2024-01-01   llama-cpp: 1710 -> 1742  (R. Ryantm)  [1 file, -2/+2]
2023-12-31   llama-cpp: fix cuda support; integrate upstream  (happysalada)  [1 file, -28/+33]
2023-12-29   Merge pull request #277451 from accelbread/llama-cpp-update  (Nick Cao)  [1 file, -2/+2]
             llama-cpp: 1671 -> 1710
2023-12-28   llama-cpp: 1671 -> 1710  (Archit Gupta)  [1 file, -2/+2]